entry_id            stringlengths (33-33)
published           stringlengths (14-14)
title               stringlengths (15-199)
authors             list
primary_category    stringlengths (5-18)
categories          list
text                stringlengths (1-461k)
http://arxiv.org/abs/2307.02350v2
20230705151022
The ABJM Hagedorn Temperature from Integrability
[ "Simon Ekhammar", "Joseph A. Minahan", "Charles Thull" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2307.05393v1
20230706034818
Pattern and Polarization Diversity Multi-Sector Annular Antenna for IoT Applications
[ "Abel Zandamela", "Nicola Marchetti", "Max J. Ammann", "Adam Narbudowicz" ]
eess.SP
[ "eess.SP", "cs.SY", "eess.SY" ]
Pattern and Polarization Diversity Multi-Sector Annular Antenna for IoT Applications Abel Zandamela, Graduate Student Member, IEEE, Nicola Marchetti, Senior Member, IEEE, Max J. Ammann, Fellow, IEEE, and Adam Narbudowicz, Senior Member, IEEE This work was supported by Science Foundation Ireland under Grant 18/SIRG/5612. A. Zandamela, N. Marchetti, and A. Narbudowicz are with CONNECT Centre, Trinity College Dublin, The University of Dublin, Dublin 2, Ireland (email: {zandamea, nicola.marchetti, narbudoa}@tcd.ie). A. Narbudowicz is also with the Department of Telecommunications and Teleinformatics, Wroclaw University of Science and Technology, Wroclaw 50-370, Poland. Max J. Ammann is with the Antenna and High Frequency Research Centre, School of Electrical and Electronic Engineering, Technological University Dublin, D07ADY7 Dublin, Ireland (e-mail: [email protected]). August 1, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This work proposes a small pattern and polarization diversity multi-sector annular antenna with electrical size and profile of ka=1.2 and 0.018λ, respectively. The antenna is planar and comprises annular sectors that are fed using different ports to enable digital beamforming techniques, with efficiency and gain of up to 78% and 4.62 dBi, respectively. The cavity mode analysis is used to describe the design concept and the antenna diversity. The proposed method can produce different polarization states (e.g. linearly and circularly polarized patterns), and pattern diversity characteristics covering the elevation plane. Owing to its small electrical size, low-profile and diversity properties, the solution shows good promise to enable advanced radio applications like wireless physical layer security in many emerging and size-constrained Internet of Things (IoT) devices. Pattern diversity, small antennas, polarization diversity, beamforming, annular sector antennas, small IoT devices. § INTRODUCTION THE ability of an antenna to alter its radiated pattern and polarization characteristics is a highly desirable property in many wireless systems<cit.>. In that regard, the rapid advance of the Internet of Things (IoT) technology creates an increasing need for small pattern and polarization diversity antennas to support cutting-edge applications like on/off-body communications, medical body area networks, localization, physical layer security, and others <cit.>. Small pattern diversity antennas have been proposed, e.g., in <cit.>. 
In <cit.>, a unilateral turnstile-shaped patch antenna is proposed for azimuth plane beamsteering; the antenna has size ka=1.32, and profile 0.024λ (where a is the radius of the smallest sphere that completely encloses the antenna at the center operating frequency f=c/λ, k = 2π/λ is the free space wavenumber, and λ is the wavelength). Magnetic and electric near-field resonant parasitic elements are used for pattern diversity in <cit.> (size: ka=0.98, and profile: 0.0026λ). A shared-aperture pattern diverse quasi-Yagi antenna with size ka=3 and a profile of 0.005λ is proposed in <cit.>. A modified array structure (size: ka=0.98 and profile: 0.004λ) is used to switch between unidirectional and omnidirectional patterns in <cit.>. Further studies in miniaturized antennas are presented, e.g., in <cit.>. A quad-polarized Huygens dipole antenna with ka=0.944 and profile 0.044λ is discussed in <cit.>. The theory of characteristics modes is used for quad-polarization in <cit.>, with size ka=2.36 and a profile of 0.008λ. The work in <cit.> proposes a tri-polarized planar antenna with ka=2 and a profile of 0.04λ. While the above-discussed designs present significant advances in efficiency, bandwidth, and single-diversity performance, some works either still have a relatively large electrical size or profile <cit.>, or the pattern diversity is restricted to a few discrete states that cannot be activated simultaneously, e.g., in <cit.>. This limits the implementation of many emerging and advanced wireless radio applications like localization and physical layer security techniques in small IoT devices. It should be highlighted that all the above works either allow for pattern or polarization diversity. Therefore, it is difficult to design a dual-diversity antenna with a small electrical size and low-profile; however, such antennas have the potential to enable many advanced wireless applications. In recent years, only a few works have proposed pattern and polarization diverse compact antennas, e.g., in <cit.>. The work in <cit.>, proposed a three-layer Yagi patch antenna with ±45 dual-polarization and pattern diversity (size: ka=5.2 and profile 0.08λ). In <cit.>, a radiating patch antenna is combined with diagonal metal walls for multi-polarization and multi-directional beams (size: ka=2 and profile: 0.14λ). In <cit.>, a dual-polarized beamsteering active reflection metasurface antenna is proposed (size: ka=4.68, and profile 0.4λ). Lastly, an aperture-stacked patch with cross-shaped parasitic strips is used for dual-polarization and pattern diversity in <cit.> (size: ka=5.3 and profile: 0.23λ). While the above designs can enable pattern and polarization diversity, their dimensions are still relatively large (i.e. ≥2ka) for size-constrained IoT applications. In addition, these structures are not planar, which can limit their integration in many emerging IoT systems. Microstrip patch antennas are a well-known solution for designing low-cost, low-profile, and planar antennas, which are some of the key requirements for IoT applications. For a microstrip patch antenna occupying an area that can be bounded by a region of radius r, one can design a sector microstrip patch by using only a portion of the patch area. The sector area is then given by A_sector=π r^2(α/2π), where α is the sector angle. Such structures can be advantageous over standard shapes like circular, annular rings, or rectangular shapes in terms of size and profile miniaturization, as well as simpler design structures. 
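As a quick numerical illustration of the size metrics used in these comparisons (electrical size ka, profile in wavelengths, and the sector area A_sector = π r^2 (α/2π)), the following sketch evaluates them for values close to those quoted for the proposed design; the specific numbers are assumptions for illustration only.

```python
import math

# Size metrics used in the comparisons above; values are close to those quoted for the
# proposed design (operation near 4.2 GHz, outer radius 14 mm, 1.27 mm substrate) and
# serve only to illustrate the definitions.
f = 4.2e9                       # centre frequency (Hz)
c = 3e8                         # free-space speed of light (m/s)
lam = c / f                     # wavelength
k = 2 * math.pi / lam           # free-space wavenumber

a = 14e-3                       # radius of the smallest enclosing sphere (m)
t = 1.27e-3                     # substrate thickness / profile (m)

print("electrical size ka =", round(k * a, 2))     # ~1.2
print("profile t/lambda   =", round(t / lam, 3))   # ~0.018

# Area of a sector of radius r and angle alpha: A_sector = pi * r^2 * (alpha / 2 pi)
r, alpha = 14e-3, math.pi / 2
A_sector = math.pi * r**2 * (alpha / (2 * math.pi))
print("sector area =", round(A_sector * 1e6, 1), "mm^2")   # one quarter of the full disc
```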
Sector microstrip antennas have been proposed, e.g., in <cit.>. In <cit.>, a driven circular sector and annular sector directors are used to design a quasi-Yagi array. In <cit.>, a sector annular ring with a coupled sector patch is used for dual-band operation. Circular sector antennas are investigated for bandwidth enhancement and antenna miniaturization in <cit.>, with null-scanning in <cit.>, tilted circular polarization in <cit.>, and low cross-polarization in <cit.>. However, pattern and polarization diversity are not realized in the above designs, limiting their use for advanced radio applications in emerging small IoT devices. In this work, we propose for the first time a small pattern and polarization diversity multi-sector annular microstrip patch antenna. The design comprises four-ports and operates near 4GHz, with electrical size ka=1.2, low-profile of 0.018λ, and a 10dB and 6dB Impedance Bandwidth (IBW) of 11MHz and 79MHz, respectively. The antenna is planar and exploits annular sectors, slits, and vias-based mutual coupling enhancement methods to realize a simpler design configuration with different polarization, e.g., linear, circular, and ±45 dual-polarization. In addition, good beamsteering characteristics covering the elevation plane are also demonstrated with an antenna gain of up to 4.62dBi. At first, the design principle of an annular sector antenna is presented. Then a multiport design is derived, and the mutual coupling enhancement techniques are discussed. Next, the diversity characteristics are described. Finally, experimental results are presented to validate the proposed design concept. § WORKING PRINCIPLE AND ANTENNA CONFIGURATION §.§ Design Principle For an annular sector microstrip antenna printed on an electrically thin substrate, i.e., t≪λ (where t is the substrate thickness), the cavity model can be used to determine the generated electric fields at a point (ρ,ϕ) <cit.> E (ρ,ϕ) = jωμẑ∑_m,nψ_mn (ρ,ϕ) ψ_mn(ρ',ϕ')/k^2 - k_mv^2 where k is the wavenumber, (ρ',ϕ') is the feed point in polar coordinates, μ is the permeability of the medium, and the eigenfunctions are computed as <cit.> ψ_mn = [ J_v (k_mvρ)N'_v(k_mvr_i) - J'_v(k_mvr_i)N_v(k_mvρ) ] cosv ϕ where J_v and N_v represent the cylindrical functions of the first and second kind (Bessel and Newman functions) of order v, respectively; v=nπ/α, α is the sector angle, k_mv are the resonant wave numbers, which are solutions of J'_v (k_mvr_i)N'_v(k_mvr_e) = J'_v (k_mvr_e)N'_v(k_mvr_i) where r_i and r_e are the annular sectors' inner and outer radii, respectively. The corresponding resonance frequency for the specific mode can then be approximated by f=ck_mv/(2π r_e √(ϵ_r)), where c is the light velocity in vacuum, and ϵ_r is the substrate relative permittivity. The inset in fig:patch_A0a shows Ant. A, a simple annular sector antenna with α=π/2, n=1, r_i=1.5mm, and r_e=14mm. The sector is printed on a TMM6 substrate (ϵ_r=6.3, tanδ=0.0023, and thickness t=1.27mm). The S-parameters are shown in fig:patch_A0a, where the fundamental mode resonates at f_1=4.2GHz, and the second mode at f_2=5.35GHz. The surface current distribution at different time points is shown in fig:patch_A0b (for mode 1) and fig:patch_A0c (for mode 2), with the current direction highlighted for easier observation. It is noted that for mode 1 (at f_1), strong currents are seen at the center of the sector, and they are basically in the y direction. 
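The resonant wavenumbers defined by the characteristic equation above can be located numerically. The sketch below is one way to do this with scipy's Bessel-derivative routines, using the Ant. A parameters quoted in the text (α = π/2, n = 1, r_i = 1.5 mm, r_e = 14 mm, ε_r = 6.3) and the dimensionless root x = k_mv r_e, so that f = c x/(2π r_e √ε_r); fringing and effective-dimension corrections are not included, so the resulting frequencies need not match the quoted values exactly.

```python
import numpy as np
from scipy.special import jvp, yvp      # derivatives of the Bessel (J_v) and Neumann (N_v) functions
from scipy.optimize import brentq

# Parameters of Ant. A as quoted in the text (treated here as given inputs).
alpha = np.pi / 2                        # sector angle
n = 1
v = n * np.pi / alpha                    # order v = n*pi/alpha
r_i, r_e = 1.5e-3, 14e-3                 # inner and outer radii (m)
eps_r = 6.3
c0 = 3e8

# Characteristic equation J'_v(k r_i) N'_v(k r_e) - J'_v(k r_e) N'_v(k r_i) = 0,
# written in the dimensionless variable x = k * r_e.
def char_eq(x):
    return (jvp(v, x * r_i / r_e) * yvp(v, x)
            - jvp(v, x) * yvp(v, x * r_i / r_e))

# Bracket sign changes on a coarse grid and refine with Brent's method.
xs = np.linspace(0.5, 12.0, 4000)
vals = np.array([char_eq(x) for x in xs])
roots = [brentq(char_eq, xs[i], xs[i + 1])
         for i in range(len(xs) - 1) if vals[i] * vals[i + 1] < 0]

for k_mv in roots[:2]:
    f_res = c0 * k_mv / (2 * np.pi * r_e * np.sqrt(eps_r))
    print(f"root x = {k_mv:6.3f}  ->  f = {f_res / 1e9:5.2f} GHz")
```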
At phase angle t=0, the flowing direction is towards +y, while they flow towards -y for the next two-quarters of the period and return to +y direction for phase angle t=3T/4; this configuration then produces a linearly polarized broadside pattern with the main beam in the xz-plane [fig:patch_A0a (top)]. For mode 2 (at f_2), the currents are along the x direction. It is seen that the currents are stronger near the center and closer to the edges of the inner radius; at t=0 strong currents are seen at the center, and the flow is towards the +x direction; they then flow towards the -x direction for the next two-quarters of the period reaching another state with strong currents at t=T/2, while returning to +x for t=3T/4. This indicates that strong currents are seen at every one-quarter of a period with a change in direction of the currents, which produces a linearly polarized beam pattern but with the main beam pointing at broadside θ=0. In both cases: |S_11|> 16dB, and the total efficiency is >85%. This performance is realized with total dimensions: 28mm×28mm×1.34mm or 0.39λ× 0.39λ× 0.018λ, where λ is the wavelength at f_1=4.2GHz. §.§ Multiport Design In the next design step, three additional annular sectors are integrated into the antenna volume. The total electric field of the antenna at r>8r_e^2/λ can then be approximated as the superposition of the L=4 annular sectors using E_tot (r,θ,ϕ)= ∑_l=1^Lc_l E_l where c_l=|A_l|e^jΔβ_l is the excitation coefficient of the lth annular sector; |A_l|, Δβ_l are the amplitude and phase excitation, respectively; E_l is lth sector electric field obtained from (<ref>). To allow separation between the annular sectors, four slits of width w_1=0.5mm, and length l_1=12.5mm are used (see fig:patch_A1). The coordinates of the feed point are P1(x,y)= P1(-6.5mm,2.3mm), and the feed of the remaining ports are obtained by rotating P1 by 90 with respect to the center of the substrate. The mutual coupling |S_mn| between different sectors is obtained from the finite element method 3D full-wave solver. For the multiport Ant. B (inset of fig:patch_A1), which uses the basic structure of Ant. A, the |S_mn|<4dB and are shown in fig:patch_A1. Such mutual coupling values are considered insufficient for many applications, as they can significantly reduce the overall system performance, e.g., antenna gain and beamsteering characteristics. To improve |S_mn|, a reactive loading in the form of shorting pins is introduced, which also allows for improved frequency tuning. From Ant. B, a single pin of radius r is placed at d_1=1mm from the edge of the outer diameter (see fig:patch_A2); because of the proximity to the edge, this value should be adjusted for larger pin radius. Next, different r values 0.125 - 0.5mm are tested and |S_mn| stays below 5.2dB in all cases. The value d_1 is also tested from 1 -3mm (with r=0.25mm), and the |S_mn| also stays below 5.2dB. In the next step, a second pin also of radius r=0.25mm is introduced at distance d_2 (see fig:patch_A2); this value is tested from 1-7mm, and d_2=4.5mm with |S_mn|<6.2dB is chosen. Further mutual coupling enhancement is realized by tuning the feed position, tuning the gap between each sector (w_1), and adding slits of length (l_2) and width (w_2). The slits are placed at distance s from the sector side; also note that the surface currents' path will increase as they flow around the slit, providing miniaturization. 
For further size reduction, a second slit with a similar configuration is introduced at the opposite side of the sector. Note that techniques like high-permittivity loading can also be used with the proposed antenna. However, for lower ka values, increased antenna losses are observed, with decreased total efficiency and deterioration of the coupling characteristics. fig:param_studies shows parametric studies of Ant. C. For brevity, only |S_11| and the coupling between the opposite sector |S_31| and one adjacent sector |S_21| are shown. fig:slit_Length shows the |S_mn| for different l_2 values for s=1.3mm. It is seen that |S_mn|>11dB is realized for l_2=8mm, and the center frequency increases for lower l_2 values. The parametric results for w_2 are shown in fig:slit_Width; it is observed that w_2=0.4mm achieves better isolation characteristics. fig:FeedPos shows the results for different feed positions along the slit length, and the optimal feed location is (7.1mm, 1.6mm). The gap between each sector is also tested for different values and the results are shown in fig:w1_Parametric. It can be seen that the center frequency increases for larger w_1 values, and |S_mn|>11dB is realized for w_1=0.35mm. The |S_mn| with optimized parameters are shown in fig:patch_A2, the frequency is lowered from f_1=4.25GHz (Ant.B) to f_1=4.065GHz (Ant. C), where the total efficiency is >60% and |S_mn|>11.7dB. The 10dB and 6dB impedance bandwidths (IBW) are respectively 36.5MHz and 82MHz, and the total efficiency is >40% across the entire 6dB impedance bandwidth region. The antenna dimensions are 0.38λ× 0.38λ× 0.018λ, and ka=1.2. § PATTERN AND POLARIZATION DIVERSITY fig:PortsPattern shows the radiation pattern for each sector of the proposed antenna, and the peak realized gain is around 2.64dBi for each case. Using (<ref>), different states are created to obtain pattern and polarization diversity. When P1 and P4 are simultaneously excited with no additional phase-shifts, i.e. P1 (|A_1|e^jΔβ_1=|A_1|) and P4 (|A_4|e^jΔβ_4=|A_4|), sectors 1 and 4 generate a radiation pattern with the main beam in the xz-plane Quadrant I (see fig:P1P4_P2P3). When P2 and P3 are excited simultaneously, they produce a pattern with the main beam in the xz-plane Quadrant II, or at 70 with respect to the main beam of P1 and P4 (i.e., θ_P2P3= θ_P1P4 + 70). To cover the yz-plane (ϕ=90), P1 (|A_1|e^jΔβ_1=|A_1|) and P2 (|A_2|e^jΔβ_2=|A_2|) are excited simultaneously to create a pattern with the main beam in the yz-plane Quadrant I (see fig:P1P2_P3P4); and a second pattern in Quadrant II, can be generated by simultaneously exciting P3 and P4. fig:monop_reconf shows a horizontally polarized omnidirectional pattern obtained by simultaneous excitation of all the antenna ports without additional phase shifts in each excited port. fig:broad_lp shows a linearly polarized pattern with the main beam pointing at θ=0. The pattern is generated by exciting pairs (P1, P3) and (P2, P4) with a 180 phase difference between the ports; this is (|A_1|e^jΔβ_1=|A_1|, |A_3|e^jΔβ_3=-|A_3|), and (|A_2|e^jΔβ_1=|A_2|, |A_4|e^jΔβ_4=-|A_4|). It is also interesting to note that the excitation of only one pair will produce a ± 45 polarization. In this case exciting only P1 and P3 (with 180 phase difference) will produce a broadside pattern with -45 polarization and the pair P2 and P4 (with 180 phase difference) will generate a +45 polarization. The results in fig:rhcp_reconf also show the polarization diversity of the proposed antenna. 
To realize circular polarization, a sequential feed with 90 phase difference between each adjacent port is used, i.e. P1 (|A_1|, Δβ_1=0), P2 (|A_2|, Δβ_2=90), P3 (|A_3|, Δβ_3=180), and P4 (|A_4|, Δβ_4=270). The configuration produces a broadside Right-Hand CP (RHCP) pattern (see fig:ARsimula). The axial ratio of the obtained patterns is <1dB within the entire 6dB IBW (see fig:ARsimulb). Note that a Left-Hand CP (LHCP) pattern can be obtained by reversing the above-presented phase configuration. Lastly, Table I outlines each port excitation value to generate the pattern and polarization diversity characteristics of the proposed design. § EXPERIMENTAL RESULTS To validate the proposed concept, the antenna was manufactured using LPKF ProtoMat M60 and is shown in fig:ant_prototype. The S-parameters measurements were conducted using a four-port VNA (R&S ZVA40). It can be seen that the center frequency shifted upwards from 4.065GHz (simulated case, fig:patch_A2) to 4.16GHz (see fig:meas_spara). The 10dB IBW changes from 36.5MHz in simulations to 11MHz in measurements; while the 6dB IBW changes from 82MHz in simulations, to 79MHz in measurements. These discrepancies are most likely due to manufacturing and permittivity tolerances. Anechoic chamber measurements were conducted and the measurement setup is shown in fig:ant_prototypec. The measured patterns (xz-plane, ϕ=0) are shown in fig:meas_ports at f_1=4.16GHz for each respective port. Overall, it is seen that the measured patterns have comparable properties with the respective simulated cases, with their main beams pointing towards: θ=-23 (in simulations) and θ=-25 (in measurements) for port 1 as shown in fig:meas_portsa; θ=21(in simulations) and θ=19 in the measured case, for port 2 (see fig:meas_portsb); for port 3, θ=23 for both the simulated and measured cases (fig:meas_portsc); and finally for port 4, θ=-21 for both the simulated and measured cases (fig:meas_portsd). The small beamwidth discrepancies seen in the measured patterns, the main beam direction for ports 1 and 2, are most likely due to antenna holder reflections during the measurements (see fig:ant_prototypec). fig:RealGain shows the comparisons between the measured and simulated realized gain for each port. At the center frequency of each case (i.e., 4.065GHz in simulations and 4.16GHz in measurements), the peak realized gain is 2.64dBi in simulations, and it slightly decreases to 2.21dBi in measurements. Such discrepancy may be attributed to manufacturing and substrate tolerances and reflections due to the antenna holder during measurements. fig:meas_p1p4_p2p3 shows pattern diversity in the xz-plane, fig:meas_p1p4_p2p3a shows the radiation pattern for θ=-35 direction, and the θ=35 radiation pattern is shown in fig:meas_p1p4_p2p3b. Overall, good agreement is obtained between the simulated and measured cases, and the beamwidth differences may be explained by the beamwidth discrepancies highlighted in fig:meas_ports. fig:meas_polar_reconfa shows a horizontally polarized omnidirectional pattern. In this case, too, a good agreement is obtained between the simulated and measured cases, with the small shouldering and deeps most likely resulting from the beamwidth discrepancies seen in fig:meas_ports. A linearly polarized broadside pattern is shown in fig:meas_polar_reconfb. The results also demonstrate good agreement with their counterpart simulated values, validating the efficacy of the proposed design concept. 
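The port excitations behind the diversity states just described (and collected in Table I) can be summarized as excitation-coefficient vectors c_l = |A_l| e^{jΔβ_l}; a minimal sketch, assuming unit amplitudes on the active ports:

```python
import numpy as np

deg = np.pi / 180

def excitation(phases_deg, active=(1, 1, 1, 1)):
    """Excitation coefficients c_l = |A_l| exp(j*dbeta_l) for ports P1..P4 (unit amplitudes)."""
    return np.array([a * np.exp(1j * p * deg) for a, p in zip(active, phases_deg)])

# Diversity states described in the text; amplitudes are assumed equal on the active ports.
states = {
    "xz-plane beam, quadrant I   (P1+P4)":      excitation((0, 0, 0, 0), active=(1, 0, 0, 1)),
    "xz-plane beam, quadrant II  (P2+P3)":      excitation((0, 0, 0, 0), active=(0, 1, 1, 0)),
    "yz-plane beam, quadrant I   (P1+P2)":      excitation((0, 0, 0, 0), active=(1, 1, 0, 0)),
    "yz-plane beam, quadrant II  (P3+P4)":      excitation((0, 0, 0, 0), active=(0, 0, 1, 1)),
    "omnidirectional, horizontal polarization": excitation((0, 0, 0, 0)),
    "broadside linear polarization":            excitation((0, 0, 180, 180)),
    "broadside -45 deg polarization (P1, P3)":  excitation((0, 0, 180, 0), active=(1, 0, 1, 0)),
    "broadside +45 deg polarization (P2, P4)":  excitation((0, 0, 0, 180), active=(0, 1, 0, 1)),
    "RHCP, sequential 0/90/180/270":            excitation((0, 90, 180, 270)),
    "LHCP, reversed phase sequence":            excitation((0, -90, -180, -270)),
}

for name, c in states.items():
    print(f"{name:42s}", np.round(c, 2))

# The corresponding far field follows the superposition E_tot = sum_l c_l * E_l,
# with E_l the simulated single-port pattern of sector l.
```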
To highlight the novelty and main advantages of the proposed antenna, a comparison with previously published works is outlined in tab:survey. The works in <cit.> allow for either different beam patterns or polarization diversity, but not both. Although the works in <cit.> realize both different beam patterns and polarization states, the proposed solution is planar and achieves pattern and polarization diversity with a structure of only ka=1.2 and a 0.018λ profile. To further complement these comparisons, fig:TheorGain shows the Harrington maximum theoretical gain (G_max) computed with respect to the size of the smallest sphere that fully encloses each antenna <cit.>. The multi-sector design realizes a gain of 4.62dBi and therefore approaches its maximum theoretical gain of 5.84dBi more closely than other previously published pattern and polarization diversity antennas, making it a good candidate to enable advanced radio applications in emerging small Internet of Things devices. § CONCLUSION A very low-profile (0.018λ), compact (ka=1.2) pattern and polarization diversity annular-sector antenna, with a gain close to its Harrington maximum gain, was presented. The diversity characteristics are realized through analysis of the modes excited in an annular sector antenna. Using a simple mutual coupling enhancement technique based on slits and via loading, four concentrically rotated annular sectors are excited through four different ports. The proposed method realizes pattern diversity covering the elevation plane and generates a horizontally polarized omnidirectional pattern as well as broadside patterns with linear, circular, and ±45 dual polarization. A prototype was manufactured and tested, and good agreement was demonstrated between the measured and simulated results, validating the design concept. The diversity characteristics were achieved using a planar structure designed on a single printed layer without requiring externally controlled switches. The structure is simple and compact and supports pattern and polarization diversity with simultaneous excitation of multiple beams within a structure requiring only ka=1.2. It therefore offers the performance and size needed to enable advanced wireless radio applications in emerging size-constrained Internet of Things devices.
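For reference, the Harrington bound used in this comparison is G_max = (ka)^2 + 2ka for an antenna enclosed by a sphere of electrical size ka; a one-line check reproduces the 5.84 dBi figure quoted above for ka = 1.2.

```python
import math

def harrington_max_gain_dbi(ka):
    """Harrington limit G_max = (ka)^2 + 2*ka for an antenna enclosed by a sphere of electrical size ka."""
    return 10 * math.log10(ka**2 + 2 * ka)

print(round(harrington_max_gain_dbi(1.2), 2))   # 5.84 dBi, the maximum quoted for ka = 1.2
```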
http://arxiv.org/abs/2307.00616v1
20230702165157
Low-energy tail of the spectral density for a particle interacting with a quantum phonon bath
[ "Donghwan Kim", "Bertrand I. Halperin" ]
cond-mat.other
[ "cond-mat.other", "cond-mat.quant-gas" ]
#1#2#1#2 #⃗1#1 #1Fig. <ref> #1Tab. <ref> 𝐪𝐩𝐀𝐚𝐫𝐃𝐛𝐞𝐣𝐤𝐅𝐯𝐬𝐆𝐛𝐬𝐊ν_inq_inν_outq_outm^* effα Department of Chemistry and Chemical Biology, Harvard University, Cambridge, Massachusetts 02138, [email protected] of Physics, Harvard University, Cambridge, Massachusetts 02138, USA We describe two approximation methods designed to capture the leading behavior of the low-energy tail of the momentum-dependent spectral density A(, E) and the tunneling density of states D(E) for an injected particle, such as an electron or an exciton, interacting with a bath of phonons at a non-zero initial temperature T, including quantum corrections due to the non-zero frequencies of the relevant phonons. In our imaginary-time-dependent Hartree (ITDH) approximation, we consider a situation where the particle is injected into a specified coherent state of the phonon system, and we show how one can use the ITDH approximation to obtain the correlation function C(τ) for that initial state. The thermal average C(τ) is obtained, in principle, by integrating the result over all possible initial phonon coherent states, weighted by a thermal distribution. However, in the low-energy tail, one can obtain a good first approximation by considering only initial states near the one that maximizes the integrand. Our second approximation, the fixed-wave-function (FWF) approximation, assumes that the wave function of the injected particle evolves instantaneously to a wave function which then is independent of time, while the phonon system continues to evolve due to interaction with the particle. We discuss how to invert the Laplace transform and how to obtain A(,E) as well as D(E) from the imaginary-time analysis. The FWF approximation is used to calculate D(E) for a one-dimensional continuum model of a particle interacting with acoustic phonons, and effects due to the quantum motion of phonons are observed. In the classical phonon limit, where the nuclear mass is taken to infinity while the elastic constants and other parameters are held fixed, the dominant behaviors of both the ITDH and FWF approximations in the low-energy tail reduce to that found in the past for a particle in a random potential with a Gaussian statistical distribution. Low-energy tail of the spectral density for a particle interacting with a quantum phonon bath Bertrand I. Halperin August 1, 2023 ============================================================================================== § INTRODUCTION In 1957, Rashba and Davidov published an article in Ukrainian in the Ukrainian Physical Journal on optical absorption in a molecular crystal with a weak interaction between excitons and phonons <cit.>. Also in 1957, Rashba published a pair of articles in the Russian journal Optika i Spektroskopiya on the theory of electronic excitations interacting with lattice vibrations in a molecular crystal <cit.>. These articles considered cases of both light and heavy excitons, distinguishing excitons whose band width in the absence of lattice distortions is large or small on a scale depending on the strength of the particle-phonon interactions, with particular attention to the case of strong interactions. The three papers considered one-dimensional chains as well as three-dimensional crystals. An important question, addressed in Ref. <cit.>, was how do phonons affect the line shape for optical absorption associated with the creation of an exciton, at finite temperatures as well as at T=0? 
In simple cases, the optical absorption for a photon of frequency ω will be proportional to the momentum-resolved spectral density A(,E) at energy E=ħω for an exciton injected with momentum =0. A related quantity is the tunneling density of states D(E) for a particle such as an exciton or an electron injected at a single point, which is equal to the integral of A(,E) over all values of the momentum . Reference <cit.> and many subsequent works have examined the absorption line shape near the peak of the spectrum, where the absorption is relatively large, or in the high-energy tail, where the absorption falls off as an inverse power of the energy. In general, these regions of the spectrum can be understood by using perturbation theory or related diagrammatic methods to treat exciton-phonon interaction. In the present paper, however, we shall be concerned with the low-energy tail of the spectrum, where the density of states is very small and the absorption is very weak. This is a region where methods based on perturbation theory are generally inadequate and other approaches must be used. In many insulating materials, particularly alkali halides and other materials with a relatively large band gap and tightly bound excitons, the optical-absorption coefficient at photon energy E, for a range of temperatures T, has been fit by the empirical Urbach formula: α (E) = α_0 e^- σ (E_0 - E) / T, where α_0, σ, E_0 are material parameters, with, typically, σ≈ 1. (See, e.g., <cit.> and references therein. We use units where k_B = ħ = 1.) Attempts to explain (<ref>) have generally treated the phonon bath as giving rise to lattice distortions that can be treated as static on the time-scale of interest for the absorption process. In this case, the problem reduces to the model of a particle interacting with a potential that obeys Gaussian statistics with variance that will be proportional to T, for T ≫ω_ph / 2, where ω_ph is a characteristic phonon frequency <cit.>. The low-energy tail is then produced by processes in which the exciton is injected in a region where a thermal distortion has led to a reduction in the local energy gap. For T ≤ω_ph / 2, it is clear that quantum motion of the lattice should be taken into account, and one might expect significant corrections to the classical phonon picture. Indeed, observations do not necessarily conform to Eq. (<ref>) at low temperatures. In some cases, the law has been seen to apply with the replacement of T by an effective temperature T^* that is of the order of ω_ph / 2 at low temperatures. However, it is not clear whether the observed low-temperature behavior is controlled by phonons or whether the effects of impurities must be taken into account. Explanations of Urbach's rule have been only partially successful, even at high temperatures where the classical phonon approximation is presumably valid. Typically, theoretical analyses predict an absorption tail of the form α (E) ∼α_1 e^h(E) / T, where the function h is independent of T, and α_1 varies relatively slowly with E and T<cit.>. However, h is not a perfectly linear function of E in any obvious model, and there is no clear reason why dh/dE should turn out to be very close to 1. Nevertheless, numerical calculations have produced absorption curves that are in rather good agreements with (<ref>) in at least some instances <cit.>. In the present paper, we shall not attempt to provide further insight into the remarkable validity of (<ref>) in the high-temperature regime. 
Rather, we wish to explore methods for including the dynamic effects of lattice vibrations at temperatures where they might become important. We introduce two related approximations, which we expect will describe the most important dependences of the spectral density and tunneling density of states, (on a logarithmic scale), in the region of the low-energy tail, in a system without impurities. We note that impurities may well be important in many cases, and indeed there have been many theoretical investigations of low-energy behavior of D(E) and A(,E) for particles in a random potential due to impurities <cit.>. Nevertheless, we believe that it is important at least as a matter of principle to understand the effects of phonons in an ideal system. The effects of phonon interactions on the behavior of an isolated electron or exciton have been studied extensively over the years in the context of the polaron problem. These investigations have largely concerned such questions as the phenomenon of self-trapping in the presencece of strong particle-phonon interactions, and the binding energy and effective mass of the resulting polaron, as well as polaron mobility at low temperatures, rather than the spectral properties of interest to us here (cf. <cit.> and references therein). The concept of self-trapping will play a role in the discussion below, however. Our principal approach to the low-energy tail problem makes use of correlation functions in imaginary time, of the particle creation and annihilation operators, which are Laplace transforms of the density of states and spectral densities in which we are interested. After defining our model in the following section, we discuss a procedure for inverting the Laplace transform that is applicable to the low-energy behavior of the functions we wish to calculate. In Sec. <ref> we describe our principal approximation, which we denote as the imaginary-time-dependent Hartree (ITDH) approximation. In Sec. <ref> we describe a fixed-wave-function (FWF) approximation, which we expect to be less accurate than the ITDH approximation but easier to apply. In Sec. <ref>, we discuss the behavior of these approximations in the classical phonon limit, where we find that the two approximations are essentially equivalent and are closely related to previous studies of the low-energy tail in a Gaussian random potential. The FWF approximation is applied in Sec. <ref> to a one-dimensional model of a particle interacting with quantum-mechanical acoustic phonons in the continuum limit. While the bulk of our paper is focused on obtaining an optimum estimate of the tunneling density of states D(E), the analysis is easily extended to predict the momentum-dependent spectral density A(,E). This is discussed in Sec. <ref> . Although our paper is presented largely in the context of one-dimensional models, the methods should be useful, with some possible modifications, for three-dimensional problems. These are discussed in Sec. <ref>. In Appendix <ref>, we present a derivation of the imaginary-time-dependent Hartree equations of motion. Some higher-order corrections, which contribute pre-exponential corrections to the density of states in the classical phonon limit, are discussed in Appendix <ref>. Other appendixes discuss details of the density of states maximization related to a modified form of the FWF, statistics of the potential fluctuations in a thermal ensemble with and without quantum corrections, and an additional method for obtaining D(E) in the FWF approach. 
Our principal results are summarized in Sec. <ref>. § MODEL We consider here a one-dimensional model of particles and phonons on a lattice. We assume lattice constant a, with periodic boundary conditions and N ≫ 1 sites, giving total length L=Na. We assume a Hamiltonian of the general form H = H_e + H_p + H_ep , H_e = ∑_k ϵ_k c^†_k c_k , H_p = ∑_k ω_k a^†_k a_k , H_ep = a ∑_k ∑_x ψ^†_x ψ_x ( λ_k a_k^† + λ_-k^* a_-k ) e^-ikx , where ϵ_k and ω_k are dispersion relations for particles and phonons, respectively, c_k (a_k) and c_k^† (a_k^†) are annihilation and creation operators for a particle (a phonon) with momentum k, while λ_k is the coupling strength for interaction between a particle and a phonon with momentum k, and ψ^†_x is a creation operator for a particle on a lattice site at position x = n a, with n an integer: ψ^†_x = L^-1/2∑_k e^-ikx c^†_k . We are using a normalization such that ψ_xψ^†_x'_∓≡ψ_xψ^†_x'∓ψ^†_x'ψ_x = a^-1δ _x x', with - for a boson and + for a fermion, which will facilitate passing to the continuum limit, replacing a ∑_x by ∫ dx. We shall assume the Hamiltonian is time-reversal invariant, so λ_-k=λ_k^*. It will be helpful to rewrite the coupling constants λ_k as λ_k = (2N M ω_k )^-1/2γ_k , where M is the nuclear mass. Then γ_k will remain constant if we vary M while keeping fixed the elastic constants M ω_k^2 and keeping fixed the deformation potential felt by a particle for a given displacement of the atoms. The classical phonon model can then be described by taking M to infinity and ω_k → 0, keeping γ_k fixed. The coefficients λ_k have dimensions of energy, while γ_k has dimensions of energy per unit length. We wish to calculate the low-energy tail of the particle density of states D(E). Specifically, we consider an initial state described by a thermal distribution of phonons, with no particle present, described by the initial density matrix w = Z^-1 e^-H_p / T , where Z = tr e^-H_p / T, and we wish to calculate the distribution of possible energy changes produced by the added particle. If we work in the basis of exact eigenstates of H, then we may write D(E) = Z^-1∑_f ∑_i | fψ_x_0 ^†i |^2 δ (E - E_f + E_i) e^-E_i / T , where x_0 is an arbitrary point on the lattice, and the sum is over eigenstates of the Hamiltonian with no particle present for the state i with energy E_i and one particle present for the state j with energy E_j. The result will be independent of the choice of x_0, as the model is translationally invariant. Alternatively we may calculate the correlation function C̃ (t) = tr (e^i H tψ_x_0 e^- i H tψ^†_x_0 w) = ∫_- ∞ ^∞ e^-i t E D(E) dE , or its imaginary-time version C(τ) ≡C̃ (-i τ) = tr (e^ H τψ_x_0 e^- H τψ^†_x_0 w) = ∫_- ∞ ^∞ e^- τ E D(E) dE. In principle, if C(τ) is accurately known, it can be analytically continued to obtain C̃, which can then be Fourier transformed to obtain D(E). However, an approximate solution at real times would not be useful for obtaining the low-energy tail because the Fourier transform would be very unstable to small errors due to the oscillating nature of the integrand. In the following section, we argue that knowing C(τ) we can in fact directly obtain a good estimate of the low-energy tail of D(E) under appropriate circumstances. The momentum-dependent spectral density A(k,E) is defined by replacing the operator ψ^†_x_0 in Eq. (<ref>) with the operator c^†_k: A(k,E)=Z^-1∑_i,ffc_k^†i^2δ(E-E_f+E_i)e^-E_i/T. 
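The defining relation C(τ) = ∫ e^{-τE} D(E) dE can be checked directly for any model density of states. The sketch below does so for a toy Gaussian D(E), chosen purely for illustration because its Laplace transform is known in closed form.

```python
import numpy as np

# Toy density of states (an assumption for illustration): Gaussian of variance 1/2 centred at E0.
E0 = 1.0
E = np.linspace(-6.0, 8.0, 7001)
dE = E[1] - E[0]
D = np.exp(-(E - E0) ** 2)

def C_of_tau(tau):
    """C(tau) = integral exp(-tau*E) D(E) dE, the imaginary-time correlator as a Laplace transform."""
    return np.sum(np.exp(-tau * E) * D) * dE

# For this D(E) the transform is known in closed form: sqrt(pi) * exp(-tau*E0 + tau^2/4).
for tau in (0.5, 1.0, 2.0):
    exact = np.sqrt(np.pi) * np.exp(-tau * E0 + tau**2 / 4)
    print(f"tau = {tau}:  numeric {C_of_tau(tau):.6f}   closed form {exact:.6f}")
```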
Similarly we may define a momentum-dependent correlation function C(k,τ) by replacing ψ_x_0 and ψ^†_x_0 in (<ref>) by c_k and c^†_k. The function C(k,τ) will be the Laplace transform of A(k,E): C(k,τ) = tr (e^ H τ c_k e^- H τ c_k^† w) = ∫_- ∞ ^∞ e^- τ E A(k,E) dE. § INVERTING THE LAPLACE TRANSFORM Let us write C(τ) and D(E) in the form C(τ) = e^g(τ) , D(E) = E_0^-1 e ^ f(E) , where E_0 is a characteristic energy, such as the width of the peak of the spectral density A(k,E) at k=0. The low-energy tail is a region of energies less than some energy E_1 that is of order E_0 below the nominal bottom of the band where we have f(E) < -1 [such that D(E) is small enough], and f'(E) > E_0^-1 [such that D(E) decreases rapidly with decreasing E]. We shall make the additional assumption here that f”(E) < 0 for all E< E_1, which, following a Halperin–Lax-type analysis, should be a good assumption for d=1 and marginally valid for d=2, but generally false for d=3. (Possible modifications to handle the case of d=3 will be discussed in Sec. <ref>.) Under these circumstances, Laplace's method can be used to evaluate the integral (<ref>); the integral should be dominated by the region near the maximum of the integrand, where E takes on the value E_τ, for the given τ, such that f ' (E_τ) = τ . In the neighborhood of E_τ, we can expand f as f (E_τ+δ E) = f(E_τ) + τδ E + (δ E)^2 (d τ / d E_τ) /2 + ... , with dτ / d E_τ = f” (E_τ) < 0. If we ignore terms higher-order in δ E, we may evaluate the integral in (<ref>) with the result C(τ) ≈ D(E_τ ) e^- τ E_τ| 2 πd E_τ/d τ| ^ 1/2 . Defining τ_E by the equation τ_E= f ' (E), we may invert the above equation to give D(E) ≈ ( 2 π )^-1/2 C(τ_E) e^τ_E E | d τ_E/d E| ^1/2 . In the low-energy tail, we expect that the last factor in the above equation should have a weaker dependence on E than the earlier factors, because τ depends linearly on f', whereas the other factors depend exponentially on f. Ignoring the last factor, we see that if the function C(τ) is known, then τ_E can be obtained from the requirement that at τ = τ_E, E = - d ln C/d τ . Also, d τ_E/dE = - ( d^2 ln C /d τ^2) ^ -1 . If the zero of energy is chosen at the bottom of the unperturbed electron energy band, then E will be negative in the low-energy tail, and dC/d τ will be positive. Moreover, large negative values of E will correspond to large positive values of τ. The above arguments can be made more precise if one can define a small parameter ζ, such that in the limit ζ→ 0, with E held fixed, ζ ln D(E) →f̃(E), where f̃(E) is independent of ζ. Laplace's method becomes exact in the limit ζ→ 0. For the systems we consider here, we find that for E lower than the minimum of the unperturbed spectrum ϵ_k, the function D(E) can be written a form similar to (<ref>) in the limit where T and M^-1 are small, while material parameters such as γ_k, ϵ_k, and M ω_k^2 are held fixed. Note that the phonon frequencies will be proportional to M^-1/2. In the case where T is comparable to or larger than the typical frequency ω_ph of the phonons most important for states in the low-energy tail, the parameter ζ will scale proportional to γ̃^2 T, where γ̃ is a typical value of the coupling constant γ_k. In the limit where T→ 0, with ω_ph small but finite, we find that ζ∝γ̃^2 ω_ph, provided that E is larger than E_min, the ground state energy for a single particle coupled to the phonons. Quantum corrections will be most important when T≲ω_ph. 
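A minimal numerical sketch of this inversion recipe: given ln C(τ) on a grid, evaluate E = -d ln C/dτ and D(E) ≈ (2π)^{-1/2} C(τ_E) e^{τ_E E} |dτ_E/dE|^{1/2}. The synthetic ln C(τ) = aτ^2/2 - bτ used here (the form produced by a Gaussian-tailed D(E)) and the grid are assumptions for illustration.

```python
import numpy as np

# Synthetic ln C(tau) = a*tau^2/2 - b*tau; a, b and the tau grid are illustrative assumptions.
a, b = 1.0, 1.0
tau = np.linspace(0.1, 10.0, 2000)
lnC = 0.5 * a * tau**2 - b * tau

# E(tau) = -d ln C / d tau  (saddle-point condition)
E_of_tau = -np.gradient(lnC, tau)

# |d tau_E / dE| = |d^2 ln C / d tau^2|^{-1}
dtau_dE = 1.0 / np.abs(np.gradient(E_of_tau, tau))

# D(E) ~ (2 pi)^{-1/2} C(tau_E) exp(tau_E * E) |d tau_E / dE|^{1/2}
D_est = (2 * np.pi) ** -0.5 * np.exp(lnC + tau * E_of_tau) * np.sqrt(dtau_dE)

# Each grid point supplies one (E, D) pair; for this lnC the recovered D(E) is the
# normalized Gaussian (2*pi*a)^{-1/2} exp(-(E - b)^2 / (2a)).
for i in (200, 1000, 1800):
    E = E_of_tau[i]
    exact = (2 * np.pi * a) ** -0.5 * np.exp(-(E - b) ** 2 / (2 * a))
    print(f"E = {E: .3f}   D_est = {D_est[i]:.3e}   exact = {exact:.3e}")
```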
Of course, the actual value of T or ω_ph necessary to be in the low-energy tail will depend on details of the system, including the energy in question. In experiments, one may enter the low-energy tail region by varying the measurement energy rather than the temperature. The approximations employed in this paper are intended to give a good approximation to the function f̃(E), which means that they should give a good approximation to the leading exponential behavior of D(E) and A(,E) in this limit. However, they are not expected to give correctly the pre-exponential factors. An important caveat is that in some situations, we may encounter examples where f”(E) > 0 for an energy range of interest, E_b < E < E_a. In such cases the Laplace transform cannot be inverted in this simple way. However, we should get a warning of this problem from studying C(τ) because as τ is varied the maximum of the integrand in (<ref>) will jump rapidly between an energy E_c > E_a to an energy E_d < E_b, so that Eq. (<ref>) will move rapidly through the energy range of interest. We shall return to this issue in Sec. <ref>, but at least in the one-dimensional models we focus on here, this problem will not arise. § IMAGINARY-TIME-DEPENDENT HARTREE (ITDH) APPROXIMATION §.§ Coherent state representation Let {α_k}=|α⟩ denote a coherent phonon state with a_k = α_k, and the value of α_k is specified for every wave vector k. The initial density matrix w may be written as w = ∫ d α w_α αα , where the integral is over the real and imaginary parts of the variables in α, i.e., dα=∏_k[d(α_k) d(α_k)], w_α = Z^-1_cohexp [- ∑_k ( | α_k |^2 / n_k) ] , n_k ≡1/ e^ω_k / T -1 , and Z_coh = ∏_k (π n_k). We may now write (<ref>) as C(τ)=∫ dα w_αC_α(τ) where C_α(τ)=⟨α|e^Hτψ_x_0e^-Hτψ_x_0^†|α⟩. §.§ Factorization approximation We next make the factorization approximation e^- H τψ^ †_x_0|α⟩≈ a^-1/2R(τ) Φ_τ^†|β(τ)⟩≡Ψ(τ), where |{β_k(τ)}⟩=|β(τ)⟩ is a (normalized) coherent phonon state, Φ_τ^† creates a particle in a state with a normalized wave function ϕ (x,τ): Φ_τ ^† = a ∑_x ϕ (x,τ) ψ^†_x , and R( τ) is a renormalization factor, chosen as a positive real number depending on τ, made necessary by the imaginary time propagation. [Our normalization convention is such that a ∑_x |ϕ(x)|^2≡⟨ϕ|ϕ⟩ =1]. At time τ=0, we set R=1, ϕ (x) = a^-1/2δ_x x_0, and β_k = α_k. The parameters should then evolve in time according to the equations of motion, which are derived in Appendix <ref>: ∂ϕ / ∂τ = - [H_e + V (x,τ) - E_ϕ(τ) -iQ(τ)] ϕ V(x,τ) = ∑_k ( λ_k β^*_k + λ_-k^*β_-k) e^-ikx , E_ϕ(τ) = ϕ(τ)H_e + V(x,τ) ϕ(τ) , Q(τ) = [ ∑_k β^*_k(τ) λ_k |ϕ^2|_k ] where | ϕ^2 | _k ≡ a ∑_x | ϕ (x, τ) |^2 e ^-ikx , and d β_k / dτ = - ω_ k β_k - λ_k |ϕ^2| _ k , dR / dτ = - R ( E_ ϕ + ∑ _k ω_k |β_k|^2 ) . Without the particle, one obtains e^-Hτ|α⟩=R_0(τ)|β_0(τ)⟩, where parameters evolve in time according to the equations of motion dβ_0k/dτ =-ω_kβ_0k, dR_0/dτ =-R_0∑_kω_kβ_0k^2, whose solutions are β_0k(τ) =α_k e^-ω_kτ R_0(τ) = e^-∑_kα_k^2(1-e^-2ω_k τ)/2. Thus, we find C_α (τ) ≈ a^-1/2R(τ)/ R _0(τ)ϕ(x_0, τ) ∏_k β_0k(τ)β_k(τ) , where for complex numbers z_1 and z_2, the inner product of the corresponding coherent states is z_1z_2 = exp{z_1^*z_2-|z_1|^2/2 - |z_2| ^2 /2 } . Now, in principle, we should solve equations of motion for all possible choices of the initial variables α and carry out the integration (<ref>). 
However, in order to get a binding energy in the low-energy tail, one needs an initial phonon configuration with a large distortion, costing an energy large compared to T. The density of states will then be dominated by the configuration that achieves the binding with minimum energy cost. [E.g. for the one-dimensional continuum model considered in <cit.>, in the classical phonon limit, the optimum distortion is proportional to ^2κ(x-x_0), with κ=(2mE)^1/2.] Relatively small departures that still preserve the binding energy will cost energies larger than T, so their probability will diminish rapidly. Correspondingly, we expect the integral (<ref>) for the Laplace transform will be dominated by a region near the optimal phonon configuration where the product w_αC_α(τ) is a maximum. Thus, using Laplace's method, we may make the further approximation C(τ) ≈ S(τ) C_max (τ), where C_max (τ) = max_ α[ Z_coh w_αC_α (τ)] , and the prefactor S(τ) should be evaluated using a perturbative expansion about the maximizing phonon configuration. Essentially, the factor S represents something like the difference in the entropies of initial and final states. (We assume that there is a single α that maximizes w_αC_α(τ) and the corresponding C_α(τ) is real. See Appendix <ref>.) In practice, it may be a good first approximation to set S=1. Finally, we should use the procedures of Sec. <ref> to invert the Laplace transform and calculate D(E). § FIXED-WAVE-FUNCTION (FWF) APPROXIMATION We may simplify the analysis by ignoring the equation of motion (<ref>) for the wave function ϕ (x,τ) and simply take ϕ to be a time-independent trial wave function ϕ_tr (x), which will depend on the energy E of interest, and which we will eventually choose to optimize our estimate of D(E). We retain Eqs. (<ref>)–(<ref>) for the time evolution of β_k and R(τ); however, we employ the initial condition R(τ = 0) = ϕ_tr (x_0) rather than R(0) = 1. For a given choice of ϕ_tr, the problem now reduces to the Franck-Condon problem of a localized electronic excitation linearly coupled to a phonon bath, which was studied, for example, by Lax and Hopfield <cit.>. In particular, using the solution of Lax with t= -iτ, we find C_ϕ_tr (τ) ≈ |ϕ_tr (x_0 )|^2 e^ - τ E^ϕ_e e^F(τ) , E^ϕ_e = ϕ_tr H_eϕ_tr , F(τ) = ∑_k |C_k|^2 /ω_k^2 [ (n_k +1) e^-ω_k τ + n_k e^ω_k τ + ω _k τ -(2 n_k +1) ] , C_k = λ_k |ϕ^2 |_k . Now, we approximate C(τ) by choosing the trial function ϕ_tr so as to maximize the value of C_ϕ_tr (τ), as given by (<ref>) and (<ref>), for the given τ: C(τ)≈max_ϕ_tr[C_ϕ_tr(τ)]. After repeating this for a suitable range of values of τ, one can then invert the Laplace transform using the methods we have described. Presumably the FWF estimate will be less accurate than the ITDH approximation, defined in the previous section. However, it should be easier to compute, and the two approximations may not differ much in practice. § CLASSICAL PHONON LIMIT As mentioned above, in the limit where the masses of the nuclei are taken to infinity, while the elastic constants and the temperature are held fixed, the problem we are considering reduces to a calculation of the density of states of a particle moving in a static random potential V(x)= ∑_k ( λ_k α^*_k + λ_-k^*α_-k) e^-ikx. At temperature T, the potential obeys Gaussian statistics, with V(x)_T = 0 and a correlation function V(x) V(x') _T = W(x-x'), where ·_T is the thermal average and [c.f. Eq. (<ref>)] W(x-x') =T ∫_-π/a^π/adk/2πγ_k^2/ρω_k^2 e^-ik(x-x') . 
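The FWF exponent F(τ) defined above is straightforward to evaluate once the couplings C_k = λ_k |ϕ^2|_k and frequencies ω_k are given. The sketch below uses an assumed toy bath and also prints the classical-limit form σ^2 τ^2/2 with σ^2 = 2T Σ_k |C_k|^2/ω_k, which F(τ) should approach when ω_k τ ≪ 1 and T ≫ ω_k.

```python
import numpy as np

def bose(omega, T):
    """Bose occupation n_k = 1 / (exp(omega/T) - 1), in units with k_B = hbar = 1."""
    return 1.0 / np.expm1(omega / T)

def F_fwf(tau, omega, C, T):
    """FWF (Franck-Condon) exponent:
    F(tau) = sum_k |C_k|^2/omega_k^2 [(n_k+1) e^{-omega_k tau} + n_k e^{omega_k tau}
                                      + omega_k tau - (2 n_k + 1)]."""
    n = bose(omega, T)
    x = omega * tau
    return np.sum(np.abs(C) ** 2 / omega**2 *
                  ((n + 1) * np.exp(-x) + n * np.exp(x) + x - (2 * n + 1)))

# Assumed toy bath (illustrative values only): a few dozen modes with ad hoc couplings C_k.
omega = np.linspace(0.05, 1.0, 40)
C = 0.05 * np.sqrt(omega)
T = 0.5

sigma2 = 2 * T * np.sum(np.abs(C) ** 2 / omega)        # classical-limit variance
for tau in (0.2, 0.5, 1.0):
    print(f"tau = {tau}:  F = {F_fwf(tau, omega, C, T):.5f}"
          f"   classical sigma^2 tau^2 / 2 = {sigma2 * tau**2 / 2:.5f}")
```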
If the function V(x) is specified, the combinations α_k + α^*_-k are thereby determined, but no information is gained about the quantities η_k = α_k - α^*_-k. If we integrate w_α over the variables η_k, we obtain a probability distribution for the function V(x) (see Appendix <ref>): w_cl = Z_cl^-1exp[ - a^2/2∑_x x' V(x) G(x-x') V(x') ] , where Z_cl=∫𝒟V exp[ - a^2/2∑_x x' V(x) G(x-x') V(x')] is a normalization constant and the integral is over all possible configurations of V(x), and G is the matrix inverse of W: a∑_x” G(x-x”) W(x”-x') = a^-1δ_xx' . §.§.§ ITDH approximation We now apply these results to the ITDH approximation. The particle wave function at imaginary-time τ in a static potential configuration V (x) can be expressed in terms of the energies E_n and eigenstates ϕ_n(x) of a particle moving in this potential as ϕ(x, τ) = a^1/2R_0(τ)/R(τ)∑_n ϕ_n(x) ϕ_n^*(x_0) e^-E_n τ , where the eigenstates are normalized such that a ∑_x ϕ^*_n (x) ϕ_n'(x) = δ_n n', ∑_n ϕ^*_n(x)ϕ_n(x') = a^-1δ_x x' . Then, at large τ, we obtain from (<ref>): C_α (τ) ≈ |ϕ_B(x_0)|^2 e^- E^0_V τ , where E^0_V is the energy of the lowest energy eigenstate in the potential V that has significant weight at the injection point x_0, and ϕ_B is the corresponding wave function. Here we have used the fact that in the classical phonon limit, β_k = β_0k = α_k. Also, since there is an energy gap separating the lowest state in a potential well from all higher energy states, we should be justified in ignoring the contributions of those states at large τ. Typically, one finds that for the optimum well shape, the minimum excitation energy for a particle in the well will be of the order of E_V^0. Thus the condition for (<ref>) to hold will be τ >1/ E_V^0. We shall also require that the temperature satisfies T< E_V^0. According to <cit.>, in the classical phonon limit, one finds ln D(E) ∼h̃( E) / γ^2 T, where γ measures the strength of the electron-phonon coupling, and h̃<0 is a function that depends on the energy but is independent of γ and T. According to (<ref>), the interesting values of τ will be related to the energy E by τ = h̃’(E) / γ^2 T. Thus τ will satisfy the inequality τ >1/ E_V^0 if T is sufficiently small, and it will satisfy it for all temperatures of interest (T<E_V^0) if γ^2 <h̃’(E ). Now, in principle, we should compute C(τ) from C(τ) →∫𝒟V w_cl |ϕ_B (x_0)|^2 e^-E^0_V τ , where the integral is over all possible configurations of V(x). However, as explained previously, in the low energy tail, the integral will be dominated by configurations close to the one that maximizes the integrand. Thus, ignoring the correction factor S(τ) in Eq. (<ref>), we may approximate (<ref>) as C(τ) ≈max[ Z_cl w_cl |ϕ (x_0)|^2 e^-E_ϕτ], where E_ϕ is given by (<ref>) and the maximum is taken over all choices of ϕ as well as of V(x). If we ignore, for the moment, the factor |ϕ(x_0) |^2, then setting δ C/ δϕ =0 leads to the results that ϕ is indeed an energy eigenstate in the potential V, which we identify with ϕ_B and E_ϕ=E^0_V. Now setting δ C /δ V = 0 gives the result V(x) = V_opt(x)= - τ U(x) , U(x) ≡ a ∑_x' W(x-x') |ϕ_B (x')|^2 , where we made use of Eq. (<ref>). Although Eqs. (<ref>)-(<ref>) are invariant under translation of the center position of ϕ and do not specify it, maximizing the prefactor in (<ref>) dictates that we choose ϕ(x) to be centered at x_0. 
However, other than this, taking into account contributions to the variational derivative from the factor |ϕ(x_0) |^2 would change these results by an amount that is small and can be neglected in the low-energy tail. For an arbitrary wave function ϕ centered at x _0 and an arbitrary potential V we may define a smoothed potential by V_s(x_0) = a ∑_xϕ (x)^2 V(x) , Let us define V_s^opt as the value of V_s(x_0) when V = V_opt and ϕ = ϕ_B. Let θ = ϕ_BH_eϕ_B be the particle kinetic energy in the state ϕ_B. Then at the optimum point, we have E_ϕ = θ + V_s^opt. Using (<ref>), (<ref>) and (<ref>), we then obtain C(τ) ≈ |ϕ_B (x_0)|^2 e^- τ (θ + V_s^opt) e^ - τ^2 σ_0^2 /2 , where σ_0^2 = a^2 ∑_x x'ϕ_B (x)^2 W(x-x') ϕ_B (x')^2 . Also, we have V_s^ opt= - τσ_0^2. §.§.§ FWF approximatiion We now turn to the FWF approximation. In the classical phonon limit, the exponent F(τ) in Eq. (<ref>) may be written as F(τ) = τ^2 σ_ϕ_tr^2 / 2 with σ_ϕ_tr^2 = 2T∑_k |C_k |^2/ω_k = a^2 ∑_x x'ϕ_tr (x)^2 W(x-x') ϕ_tr (x')^2, where we made use of Eq. (<ref>). Also, we may identify E_e^ϕ with θ. Then, Eq. (<ref>) becomes C_ϕ_tr (τ) ≈ |ϕ_tr (x_0)|^2 e^- τθ e^τ^2 σ_ϕ_tr^2 /2 . In order to find the trial wave function which maximizes this function, we ignore the pre-exponential factor and set equal to zero the variational derivative of the exponent with respect to ϕ_tr(x). This gives the equation H_e ϕ_tr(x) + V_ϕ (x) ϕ_tr(x) = μϕ_tr(x) , V_ϕ (x) ≡ - τ a ∑_x' W(x-x') |ϕ_tr(x)|^2 , where μ is a Lagrange multiplier necessary to enforce the constraint that ϕ_tr is properly normalized. We see that these are the same equations as the ones we used to determine the optimum potential V_opt and the corresponding wave function ϕ in the ITDH. Moreover, when we identify ϕ_tr with the wave function ϕ obtained in the ITDH approach, we find that σ_ϕ_tr = σ_0 and the value of C(τ) obtained from (<ref>) coincides with (<ref>). Thus, the FWF and ITDH approximations give equivalent predictions in the regime under discussion. We shall see that these results also agree, as far as the exponential factors are concerned, with the predictions of <cit.> for the low-energy tail of the density of states in a Gaussian random potential. § CONTINUUM WITH ACOUSTIC PHONONS For a system of particles interacting with acoustic phonons, a continuum model can be considered when the relevant particle wavelength is far larger than the lattice spacing a. The Hamiltonian for such a system is H=∫ dx (ψ^†(x)(-1/2md^2/dx^2)ψ(x)+γ∂ u/∂ xψ^†(x)ψ(x). . +1/2K(∂ u/∂ x)^2+1/2ρΠ(x)^2), where ψ(x) and ψ^†(x) are particle annihilation and creation operators at a position x, with ψ(x)ψ^† (x')_∓= δ (x-x') (- for bosons, + for fermions), m is the particle mass, γ is the particle-phonon coupling strength, u(x) is the nuclear displacement field, K is a bulk modulus, ρ is nuclear mass density, and Π is nuclear momentum density (momentum per unit length). The strain ε(x)≡∂ u(x)/∂ x is assumed to be small, ε(x)≪1, and the commutation relation for the nuclear displacement and momentum density is u(x)Π(x')=iδ(x-x'). The phonon dispersion relation for the one-dimensional harmonic chain is given in the continuum limit by ω_k=2/a(K/ρ)^1/2sin|k|a/2 → |k| (K/ρ)^1/2 . Note the Hamiltonian in Eq. (<ref>) is the special case of the more general Hamiltonian in Eq. (<ref>) with ω_k given by Eq. (<ref>), ϵ(k)=k^2/2m, and γ_k=γk. 
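For the acoustic model just defined, the classical-limit kernel of the previous section reduces to W(x-x') = (γ^2 T/K) δ(x-x'), so the optimal-well condition V_opt = -τ U becomes a local nonlinear problem that can be solved by fixed-point iteration. The sketch below is illustrative only (the parameter values, grid, and damping scheme are assumptions); its fixed point should approach the sech-shaped self-consistent state with particle energy μ = -m(τ γ^2 T/K)^2 / 8.

```python
import numpy as np

# Classical-limit optimal well for the continuum acoustic-phonon model.
# With W(x-x') = (gamma^2 T / K) delta(x-x'), the condition V_opt = -tau*U becomes
# V_opt(x) = -g |phi(x)|^2 with g = tau * gamma^2 * T / K, and phi the ground state in V_opt.
# All numerical values are assumptions for illustration.
m, K, gamma, T, tau = 1.0, 1.0, 1.0, 0.1, 4.0
g = tau * gamma**2 * T / K

N, dx = 601, 0.1
x = (np.arange(N) - N // 2) * dx

# kinetic operator -1/(2m) d^2/dx^2 by finite differences
T_kin = (np.diag(np.full(N, 1.0)) - 0.5 * np.diag(np.ones(N - 1), 1)
         - 0.5 * np.diag(np.ones(N - 1), -1)) / (m * dx**2)

dens = np.exp(-(x / 3.0) ** 2)          # initial guess for |phi|^2
dens /= np.sum(dens) * dx

for _ in range(100):
    evals, evecs = np.linalg.eigh(T_kin + np.diag(-g * dens))
    phi = evecs[:, 0] / np.sqrt(dx)     # ground state, normalized so that sum |phi|^2 dx = 1
    dens = 0.7 * dens + 0.3 * phi**2    # damped density update for stability

print("self-consistent particle energy mu =", evals[0])
print("soliton prediction -m g^2 / 8      =", -m * g**2 / 8)
```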
§.§ Classical self-trapping ground state In the classical phonon limit ρ→∞ while keeping the other parameters fixed, the Hamiltonian (<ref>) becomes H=∫ dx (ψ^†(x)(-1/2md^2ψ(x)/dx^2)+γε(x)ψ^†(x)ψ(x). .+K/2ε(x)^2), where ε (x) may be considered as a fixed classical function of position. We wish to find a strain configuration ε(x) and a single-particle wave function ϕ(x) which gives the lowest possible expectation value of the Hamiltonian in the single-particle subspace. First, the minimum with respect to ε(x) is found by taking the functional derivative δ H/δε(x) = γ |ϕ(x)|^2 +Kε(x) = 0. Thus, we require ε(x)=-γϕ(x)^2/K. Note that larger local particle density ϕ(x)^2 induces more strain ε(x). Substituting this back into the Hamiltonian (<ref>) in the single particle subspace, one obtains a lattice-relaxed total energy H_LR = ∫ dx (1/2mdϕ(x)/dx^2 -γ^2ϕ^4/2K), where integration by parts was used for the kinetic energy term. Next, we minimize H_LR with respect to ϕ^*(x) with the constraint ∫ dxϕ^2=1 using Lagrange multiplier μ: δ/δϕ^*(x)(H_LR-μ∫ dxϕ(x)^2) = -1/2md^2ϕ(x)/dx^2 -γ^2ϕ(x)^2ϕ(x)/K -μϕ(x) = 0. This is a nonlinear Schrödinger equation for a particle in the effective potential -γ^2ϕ^2/K with particle energy eigenvalue μ. Since any position-dependent phase factor of ϕ(x) leads to higher kinetic energy, the ground state should have a global (position-independent) phase factor. Thus, the ground state can be chosen to be real and Eq. (<ref>) becomes -1/2d^2ϕ(x)/dx^2 -1/2νϕ(x)^3 = -κ_H^2/2ϕ(x), where ν=2mγ^2/K and κ_H=(-2mμ)^1/2. Since Eq. (<ref>) is translationally invariant, it has degenerate solutions, which have the form (cf. Ref. <cit.> Appendix B) ϕ(x)=(κ_H/2)^1/2(κ_H (x-x_0)) with arbitrary center position x_0. The wavefunction in Eq. (<ref>) is normalized and its normalization condition gives κ_H=ν/4=mγ^2/2K. The particle energy of the ground state is μ=-κ_H^2/2m=-mγ^4/8K^2, and the total energy of the ground state (minimum energy) is obtained by substituting (<ref>) into (<ref>): E_min=-κ_H^2/6m=-mγ^4/24K^2. Note that the difference between the total and particle energies, E_min-μ=κ_H^2/3m=-mγ^4/12K^2, is the elastic energy. We remark that if ρ is not infinite, the quantum ground state of the system with one particle present will not be localized but will be a polaron with total momentum k=0. This state may be considered as a linear superposition of the self-trapped particle with arbitrary center positions x_0. In the limit ρ→∞, polaron energy approaches E_min(<ref>) and the energy becomes independent of the polaron momentum as the polaron mass becomes infinite. In summary, in the classical phonon limit ρ→∞, the ground state for the system of a single particle interacting with acoustic phonons in one dimension is described by a self-trapped particle state (<ref>) with lattice relaxation (<ref>). The self-trapping energy |E_min| is non-zero for any non-zero γ, although it can be very small when γ is small. §.§ Application of the fixed-wave-function approximation Now the FWF approximation introduced in Sec. <ref> can be implemented. We choose a trial wave function of the form ϕ_κ(x)=(κ/2)^1/2(κ x), with a single variational parameter κ. This seems like a good choice, since it is the correct form in the classical phonon limit ρ→∞, where it coincides with the results of <cit.> for the Gaussian white noise potential, and it happens to coincide with the form of the self-trapped ground state at T=0. We will eventually choose κ to maximize the correlation function C as in Eq. 
(<ref>) [or to maximize the density of states D(E) as in Eq. (<ref>) in the modified FWF approximation, c.f., Sec. <ref>]. In the continuum, Eq. (<ref>) becomes F(τ) = ∫_-π/a^π/adk/2πL C_k^2/ω_k^2 [(n_k +1)e^-ω_kτ+n_k e^ω_kτ +ω_kτ-(2n_k+1)], where C_k is defined in Eq. (<ref>), with λ_k =√(1/2ρ L ω_k)γk, ϕ^2_k = ∫ dx e^-ikxϕ_κ(x)^2 = kπ/2κ(kπ/2κ). Note that ϕ^2_k is a smooth function, with ϕ^2_k→1 for k≪κ and ϕ^2_k→0 for k≫κ, and ϕ^2_k is significant only for k≲κ. Furthermore, ϕ_κ(x_0) =(κ/2)^1/2 for x_0=0, E_e^ϕ_κ =ϕ_κH_eϕ_κ = κ^2/6m. Thus, using the FWF approximation, C_ϕ_κ(τ) [Eq. (<ref>)] is essentially obtained by performing the numerical integration in Eq. (<ref>). Then, C_ϕ_κ(τ) is maximized with respect to κ [Eq. (<ref>)] and the density of states D(E) is obtained from C(τ) by inverting the Laplace transform (<ref>) as described in Sec. <ref>. We first consider two special limits, the classical phonon limit and the quantum zero temperature, in order to obtain further insights. §.§.§ Classical phonon limit The classical phonon limit is achieved by taking ρ→∞ while keeping the temperature T fixed. This reduces to the case of a particle in a static Gaussian white noise potential, which was considered in Halperin–Lax analysis <cit.> where the optimal κ maximizing the density of states as in Eq. (<ref>) was given by κ_E=(-2mE)^1/2. We would like to compare this Halperin–Lax result to the FWF approximation. In the classical phonon limit, ω_k→0 [c.f. Eq. (<ref>)] and ω_kτ≪1 which simplifies Eq. (<ref>) to F(τ) = ∫_-π/a^π/adk/2π/LC_k^2 (n_k+1/2)τ^2. Since ω_k/T≪1, the above expression is further reduced to F(τ) ≈γ^2 T/2ρ∫_-π/a^π/adk/2πk^2/ω_k^2ϕ^2_k^2 τ^2. The integrand of Eq. (<ref>) is significant for k≲κ, due to ϕ^2_k, and in this interval, ω_k can be approximated as a linear dispersion ω_k≈√(K/ρ)k, since κ a≪1 in the continuum limit. In addition, for κ a≪1, the integral can be evaluated, since ∫_-π/a^π/adk/2πϕ^2_k^2 ≈∫_-∞^∞dk/2πϕ^2_k^2 = κ/3. Thus, one obtains F(τ) ≈γ^2 T/2Kκ/3τ^2 ≡σ_ϕ_κ^2τ^2/2 . Thus, from Eq. (<ref>), one gets C_ϕ_κ(τ)= ϕ_κ(x_0)^2e^-τ E_e^ϕ_κ+σ_ϕ_κ^2τ^2/2. Now the correlation function must be maximized with respect to the variational parameter κ. The value of κ that maximizes the correlation function is found by solving the eqution dln C_ϕ_κ(τ)/dκ = d/dκ[ lnκ -τ E_e^ϕ_κ+σ_ϕ_κ^2τ^2/2] =0, which, for κ>0, gives κ_τ=mγ^2 Tτ/4K+√((mγ^2 Tτ/4K)^2+3m/τ). The asymptotic form of D(E) can now be obtained using the inversion formulas described in Sec. <ref>. Using the optimal κ, given by (<ref>), Eq. (<ref>) gives E = -κ^2/6m-√((κ^2/3m)^2-2γ^2 T/3Kκ), For low energy E≪0, one obtains the Halperin–Lax result κ=(-2mE)^1/2 [c.f. Eq. (<ref>)]. The resulting approximation for D(E) is D_FWF(E) ≈(κ_E/2πξ)^1/2e^-4κ_E^3/3m^2ξ for E≪0, where ξ is introduced to make connection to the Gaussian white noise model [c.f. Eq. (<ref>)] ξ/2=γ^2T/K. [See, also, the discussion in Appendix <ref> of the modified FWF approximation, which maximizes D(E) rather than the correlation function C(τ).] Our approximate density of states from classical acoustic phonons D_FWF(E) can be compared to the exact density of states for the Gaussian white noise potential <cit.> D_exact(E) =(m^2ξ)^-1/3N'(E(m^2ξ)^-2/3), where N, which may be expressed in terms of Airy functions as N(ϵ) =π^-2([Ai(-2ϵ)]^2+[Bi(-2ϵ)]^2)^-1, is the cumulative density of states as a function of unitless energy ϵ=Em^-4/3ξ^-2/3. This gives rise to the exact asymptotic form, D_As(E) ≈4κ_E^2/π mξe^-4κ_E^3/3m^2ξ for E≪0. Note Eqs. 
(<ref>) and (<ref>) have the same exponential factor, but different prefactors. The difference in the prefactors can be reduced by considering higher-order corrections, as discussed in Appendix <ref>. The correction factor S_1(E) is given in Eq. (<ref>), which gives corrected density of states to FWF approximation: D_FWF(E) S_1(E) ≈(2/15)^1/24κ_E^2/π mξ e^-4κ_E^3/3m^2ξ for E≪0. The comparison of different approximations to the density of states in the Gaussian white noise potential is given in Fig. <ref> where we use K=1, m=1, T=0.1, and γ=1. §.§.§ Quantum zero temperature Zero temperature can be considered instead of the classical phonon limit. At zero temperature T=0, all Bose occupation numbers n_k are set to zero. Then, Eq. (<ref>) becomes F_T=0(τ) = ∫_-π/a^π/adk/2πL C_k^2/ω_k^2 [e^-ω_kτ+ω_kτ-1]. For ω_kτ≪1, F_T=0(τ) is quadratic in τ F_T=0(τ) ≈∫_-π/a^π/adk/4π L C_k^2 τ^2 ≈3γ^2ζ(3)κ^2/2π^3 (ρ K)^1/2 τ^2, where similar approximations were used as in Eq. (<ref>), and ζ is the Riemann zeta function. For ω_kτ≫1, d F_T=0(τ)/d τ ≈∫_-π/a^π/adk/2π LC_k^2/ω_k≈γ^2κ/6K, where, again, similar approximations were used as in Eq. (<ref>). According to Eq. (<ref>), this implies that for a fixed value of κ, the computed density of states vanishes below a minimum energy E_min, κ=E_e^ϕ_κ-d F_T=0(τ)/dτ = κ^2/6m-γ^2κ/6K, which implies that D(E)=0 for E < E_min= min _κ (E_min, κ) = - m γ^4/24 K ^2 . This agrees with the result (<ref>) for the ground-state energy of the self-trapped particle in the limit ρ→∞. We note that in Eq. (<ref>), the first term is the kinetic energy of the FWF [c.f. Eq. (<ref>)], and the second term is the particle-phonon interaction and the elastic energy of the FWF after lattice relaxation [c.f. the second term of Eq. (<ref>) for ϕ=ϕ_κ]. §.§ Numerical results We now turn to situations where neither T nor ρ^-1 is zero. Here we use numerical methods to compute F(τ) for various choices of the parameter κ and to find the value of κ that maximizes the resulting estimate of C(τ). Finally, we use the method of Sec. <ref> to obtain the density of states D(E). The ratio of the thermal energy to the energy of a phonon α=T/ω_κ_H is a useful parameter for understanding the importance of quantum phonon effects. The limit α≫1 will approach classical phonon limit, and small α will show the effects of quantum motion of the nuclei. To study how the density of states depends on this parameter α, we can either 1) vary ρ while fixing the temperature T, or 2) vary T while fixing ρ. We use a=0.1, K=1, and m=1, and we vary ρ, T, and γ for the following calculations. Equivalently, we are measuring E in units of (K^2/m)^1/3, and D(E) in units of (m^2/K)^1/3. §.§.§ Lg The dependence of the density of states on α by changing ρ is shown in Figs. <ref> and <ref> for γ=1 and 0.5, respectively. For smaller α, the density of states is larger, meaning the nuclear quantum effect increases the density of states. The limit α→∞, achieved by ρ→∞, corresponds to the classical phonon limit considered by Halperin and Lax <cit.>. It is also seen that the different γ values change the overall energy scale. §.§.§ Lg The dependence of the density of states on α by changing T is shown in Figs. <ref> and <ref> for γ=1 and 0.5, respectively. For larger values of α, the density of states increases, which implies that a higher temperature leads to a larger density of states. Note at zero temperature α=0, there exists energy minimum given by Eq. (<ref>) below which the density of states completely vanish. Comparing with Figs. 
<ref> and <ref>, we see that the curves for α=0.25 and α=0.5 would be almost indistinguishable from the classical phonon limit ρ→∞ at the given temperatures, which was discussed in <cit.>. If we restore the parameters m and K, the density of states can be written in the form D(E,T,ρ,γ) = ( m/E_γ)^1/2D̃ , where E_γ≡γ^4 m K^-2 = 24 |E_min|, and D̃ depends only on the dimensionless variables E/E_γ , T / E_γ and ρ E_γ / mK. This means that plots of D(E,T,ρ,γ) for γ =0.5 and γ =1 would coincide if we change variables accordingly. I.e., D(E,T,ρ,γ=0.5)=4D(E',T',ρ',γ=1) with E'=16E,T'=16T,ρ'=ρ/16. The value of D(E,T,ρ,γ) is independent of the remaining dimensionless quantity γ̃≡γ^3 m K^-2 because if we rescale the field u(x) by a factor of λ, the coefficients in the Hamiltonian (<ref>) will be modified and γ̃ will be changed by a factor of λ, but the energy eigenvalues are unchanged. However, du/dx would no longer be the strain. The Hamiltonian (<ref>) will actually become unphysical for sufficiently large values of γ, because the resulting strains can be larger than 1. §.§.§ Lg Fig. <ref> shows the relation between energy E and optimal κ that maximizes correlation function C_ϕ_tr(τ) for different α values, achieved by varying T for fixed ρ=e4, and γ=1. We find that the α=0 (zero temperature) curve follows κ=(-6mE)^1/2, for energies above the minimum energy E_min, while the curves for α=0.25 and 0.5 are close to the classical phonon limit result, κ=(-2mE)^1/2. The curves for α = 0.05 and α = 0.1 exhibit a more complicated behavior. We can understand the result κ = (-6mE)^1/2 for E_min<E<0 for α=0, and κ = (-2mE)^1/2 for classical phonon limit (large α) as follows. Ignoring ϕ_κ(x_0)^2 factor in Eq. (<ref>), one obtains C_ϕ_κ(τ)≈ e^-τ E_e^ϕ_κ+F(τ), where, according to Eq. (<ref>), at T=0, for τ in the range of interest, F has the form F(τ)≈ Aκ^p τ^2. with A a constant and p=2. In the classical phonon limit, F has a similar form but with p=1. [C.f. Eq. (<ref>)]. In either case, the optimal κ that maximizes C can be obtained from dln C_ϕ_κ(τ)/dκ≈ -τκ/3m +p Aκ^p-1τ^2 =0. For this optimal κ, Eq. (<ref>) gives the relation between E and κ: E = κ^2/6m-2Aκ^pτ = κ^2/m[1/6-2/3p]. This leads to the result κ=(-6mE)^1/2 for T=0 (p=2) as well as the known result κ=(-2mE)^1/2 for the classical phonon limit (α large, p=1). § ADDITIONAL REMARKS §.§ Momentum-dependent spectral density Using either the ITDH or FWF approximation, one finds that for a given energy E in the low-energy tail, the tunneling density of states is dominated by a particle wave function of the form ϕ(x) = f (x- x_0), where f has a fixed shape, and x_0 the position of a local minimum of the fluctuating potential. As noted in Refs. <cit.>, this suggests a simple approximation for the momentum-dependent spectral density A^(1) (k,E) ≈ | f̃ (k)|^2 D(E) , where f̃ is the Fourier transform of f(x): f̃ (k) = ∫ dx e^-ikx f(x) . (Here we use the continuum normalization, with ∫ |f|^2 dx = 1). As in the case of a random potential due to impurities, however, this approximation breaks down if k becomes too large. Because f(x) is an analytic function of position, its Fourier transform will fall off exponentially for |k | > l^-1, where l is a measure of the spatial width of the wave function. Then for sufficiently large k, we can obtain a larger contribution to the spectral density from processes where the injected particle emits or absorbs a phonon with wave vector ≈± k in order to bring it into the momentum region where A^(1) is largest. 
The contributions of these processes to the imaginary part of the particle self-energy may be written as Σ (k,E) = L/2π^2∫ dk' |λ_k'|^2 [(n_k'+1 ) A^(1)(k-k', E-ω_k' ) + n_-k' A^(1)(k-k', E+ω_-k') ] . Because the integrand will be significant only when |k'-k| ≤κ≪ |k|, we may replace k' by k in ω_k', etc., and we may bring these factors outside the integral. This leads to a contribution A^(2)(k,E) = π^-1 [ E -ϵ_k - Σ (k,E) ]^-1 = γ_k ^2 [ (1 + n_k) D(E - ω_k) + n_k D(E+ ω_k)] /2 ρω_k (ϵ_k - E)^2 . For intermediate values of k, we should approximate the spectral density by the larger of (<ref>) and (<ref>). Approximation (<ref>) will be particularly important in the case of an indirect absorption edge. For a semiconductor with an indirect band gap, such as Si, where the exciton binding energy is small and the electron-phonon interaction is too weak to produce self-trapping, the low-energy tail of the indirect optical absorption edge can be reproduced by an analogous formula involving a transition of an electron from a state near the valence band maximum, to a state near the conduction band minimum, with emission or absorption of a single phonon <cit.>. §.§ Three-dimensional systems Three-dimensional systems can differ from one-dimensional systems in several ways, which may require adjustments to the methods described above. One important difference is that in three dimensions, it is necessary for the particle-phonon coupling to exceed a critical value in order for self-trapping to occur in limit of large nuclear masses, whereas in one-dimension, self-trapping occurs for arbitrarily weak coupling. As in one-dimension, we expect that important quantum corrections will appear primarily in the energy range E_min < E < E_0, where E_0 is the nominal bottom of the free-particle band, and E_B = E_0 - E_min is the binding energy due to self trapping. Further, we want to have E_B > ω_ph> T, for our methods to apply and for quantum corrections to be important. If the coupling is below the critical value, one obtains E_B = 0, so these conditions cannot be satisfied. Another issue concerns the contribution of short wave length phonons. In three dimensions, the phonon contribution to the particle self-energy has a strong ultra-violet divergence in the continuum limit, so the contribution of short-wavelength phonons may need to be taken into account even in situations where the width of typical particle wave function ϕ(x) at the energy of interest is large compared to the lattice constant. (When quantum fluctuations are taken into account, there is also an ultraviolet divergence in one-dimension, as discussed in Appendix <ref>, but that divergence is logarithmic and is unlikely to be important in practice.) Because the ultraviolet-divergent contribution is only weakly dependent on the particle energy, we may treat it, to a first approximation, as a constant downward shift in the particle energies, which can be taken into account by a (temperature-dependent) redefinition of the threshold energy E_0. A more detailed discussion of how to treat this energy shift may be found in <cit.>, where a similar ultraviolet divergence was encountered in their analysis of the density of states for an electron in a three-dimensional Gaussian white noise potential. Additional problems may arise when one tries to extract the density of states D(E) from the imaginary time correlation function C(τ) obtained from either the ITDH or FWF approximation, using the procedure discussed in Sec. <ref>. 
At least in the classical phonon limit, we know from the Halperin–Lax analysis that at least in limit of classical phonons and wave functions wide on the scale of the lattice constant, there will be a region of energy in the low-energy tail where ln D(E) ∝ -|E-E_0|^1/2, so d^2 [ln D(E ) ]/dE^2 >0, which violates the requirements of Sec. <ref>. This difficulty may be avoided, however, if one adopts a modified procedure, described below, where the Laplace transform is inverted at an earlier stage of the calculation. In two dimensions, for Gaussian white noise in the continuum limit, the HL analysis predicts ln D(E) ∝ -|E-E_0|, so that d^2 ln D / dE^2 ≈ 0. This case is marginal, and it is unclear whether one can apply directly the ITDH method to this case. In Ref. <cit.>, results of a numerical calculation were presented for D(E) of a two-dimensional model of a particle interacting with quasiclassical frozen acoustic phonons. However, that calculation did not extend far enough into the low-energy tail to warrant comparison with an analysis using the methods of the present paper. §.§.§ Modified procedures In the Modified ITDH procedure, one chooses an arbitrary initial phonon configuration α and uses the imaginary-time-dependent Hartree equations to calculate the function C_α(τ) as described Sec. <ref>. Now, however, we use the procedure of Sec. <ref> to find the inverse Laplace transform of C_α(τ) at fixed α, which gives the conditional density of states, D_ α (E) = ∑_mn⟨α|m⟩mψ_x_0nnψ^†_x_0α × δ(E-E_n+E_m) , where the sum is over eigenstates of the Hamiltonian with one particle present for the state n and no particle present for the state m. We expect that the conditions of Sec. <ref> will be satisfied by C_α(τ) for fixed α, so there should be no difficulty inverting the Laplace transform at this stage. We now have D(E) = ∫ dα w_α D_α (E) . Then, we may approximate the density of states by the value of the integrand in this equation when α is chosen to maximize its value. [Alternatively, we may approximate the density of states choosing α in a way that maximizes w_αC_α(τ) for the value of τ that corresponds to the target value of E.] We expect that in most cases either of these choices will lead to a good approximation for D(E) in the low-energy tail, and in cases where the averaged correlation function C(τ) does satisfy the conditions for the procedure of Sec. <ref>, there should be close agreement between the modified ITDH and the original ITDH approximations. It appears that difference between the two procedures will affect only the pre-exponential factors. As in the previous sections, one should properly include an entropy prefactor S(E), which we have here set equal to 1. We may proceed in a similar manner when employing the FWF. The function C_ϕ_tr (τ) defined by (<ref>) is the Laplace transform of a function D_ϕ_tr (E), which we may consider to be an approximation to the actual density of states D(E). One should be able to obtain the function D_ϕ_tr (E) from C_ϕ_tr (τ) with good accuracy and without much difficulty, using the procedure outlined in Sec. <ref>. Having obtained the estimated density D_ϕ_tr (E) for various choices of the wave function ϕ_tr, we choose, for each E, the wave function that maximizes the estimate for that energy. This defines a modified FWF approximation: D(E) = max_ϕ_tr [D_ϕ_tr (E) ]. This should be compared to the FWF approximation of Sec. 
<ref>, where we did not compute the functions D_ϕ_tr(E) but rather obtained the density of states by taking the inverse Laplace transform of the entire function C(τ). Again, one should get more accurate results if one can include an estimate of the entropy prefactor S(E), which we have here set equal to 1. As an alternative to inverting the Laplace transform in the Modified FWF procedure, if one wishes to obtain a more accurate value for the function D_ϕ_tr (E), one may work directly in the energy regime, following the prescription of Hopfield <cit.>, which is described in Appendix <ref> below. This method is not restricted to the low-energy tail, and it can achieve, in principle, an arbitrary degree of accuracy. However, in the low-energy tail, an approximate inversion of the Lax formula using the method of Sec. <ref> is simpler and probably adequate for the level of approximation already implicit in the FWF method. We remark that there is at least one case where the modified FWF approximation taken literally will lead to nonsensical results. In a model where the lattice vibrations are dispersionless optical phonons, with a single frequency ω_0, the function D^0_tr(E) will contain a series of δ-functions at energies separated by multiples of ω_0. Choosing ϕ_tr at each E to maximize D^0_tr(E) will simply give an estimated density of states that is infinite at all energies. However, we expect that this pathology will not be a cause for worry when there is at least a moderate amount of dispersion in the phonon spectrum. We expect that in most cases, when D(E) satisfies the condition d^2 ln D /dE^2 < 0, the difference between the modified procedures and the original ITDH or FWF approximation will not be too great. For the one-dimensional continuum model studied above, we find that the modified FWF result for D(E) only differs from the result of the original FWF approximation by a constant pre-exponential factor of (3/2)^1/2. (See Appendix <ref>.) § SUMMARY In Sections <ref> to <ref> of this paper, we introduced two related approximation schemes for calculating the momentum-dependent spectral density A(k,E) and the tunneling density of states D(E) in the low-energy tail for the model of a single injected particle coupled to a thermal bath of phonons. In both schemes, we developed an approximation for the imaginary-time correlation function C(τ), which is the Laplace transform of D(E), and we discussed how D(E) can be efficiently extracted from C(τ) for energies E in the low-energy tail. In particular, we obtained a one-to-one relation between energies E and imaginary times τ_E such that D(E) is determined by C(τ) and its first two derivatives at τ = τ_E. In the ITDH approach, defined in Sec. <ref>, we proposed an imaginary-time-dependent Hartree approximation to calculate the imaginary-time correlation function starting from an initial state that is in an arbitrary coherent phonon state before the particle is injected. To obtain C(τ), one should then calculate the weighted average of this correlation function over a thermal distribution of initial states. However, in the low-energy tail, corresponding to large values of τ, we can get a good first approximation to C(τ) by considering only the single initial state that gives the largest contribution to the average.
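As an illustration of the E ↔ τ_E correspondence summarized above, the following minimal sketch inverts a Laplace transform by the standard steepest-descent (saddle-point) rule, in which τ_E solves -d ln C/dτ = E and D(E) is then fixed by ln C and its first two derivatives at τ_E; the Gaussian prefactor used here is the textbook second-order one and may differ in detail from the convention of Sec. <ref>. The test case is a Gaussian density of states, for which C(τ) is known in closed form, so the inversion can be checked directly; all numerical values are illustrative.

```python
import numpy as np

# Saddle-point inversion of C(tau) = \int dE D(E) exp(-tau E):
# tau_E solves -d ln C / d tau = E, and D(E) is then set by ln C and its
# first two derivatives at tau_E (standard second-order prefactor assumed).
def invert_laplace(logC, E, tau_grid):
    """Return approximate D(E) from ln C(tau) sampled on tau_grid."""
    lc = logC(tau_grid)
    dlc = np.gradient(lc, tau_grid)          # d ln C / d tau
    d2lc = np.gradient(dlc, tau_grid)        # d^2 ln C / d tau^2
    D = np.empty_like(E)
    for i, e in enumerate(E):
        j = np.argmin(np.abs(-dlc - e))      # pick tau_E with -d ln C/d tau = E
        D[i] = np.exp(lc[j] + e * tau_grid[j]) / np.sqrt(2.0 * np.pi * d2lc[j])
    return D

# Test on a Gaussian density of states, where C(tau) = exp(-tau E0 + tau^2 s^2 / 2).
E0, s = -1.0, 0.3
logC = lambda tau: -tau * E0 + 0.5 * (s * tau) ** 2
E = np.linspace(-2.5, -1.2, 30)
D_exact = np.exp(-(E - E0) ** 2 / (2 * s ** 2)) / np.sqrt(2.0 * np.pi * s ** 2)
D_approx = invert_laplace(logC, E, np.linspace(1e-3, 20.0, 4000))
print("max relative error in the tail:", np.max(np.abs(D_approx / D_exact - 1.0)))
```

Corrections to the single-configuration ITDH estimate itself are discussed next.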
In principle, corrections to this approximation can be obtained by using second-order perturbation theory to account for deviations of the initial phonon configuration from the optimal coherent state, as well as corrections arising from the difference between the full Hamiltonian and the Hartree approximation used to calculate the imaginary-time evolution. The FWF approximation, introduced in Sec. <ref>, is a simplification of the ITDH, in which for a given choice of τ, we ignore the time-dependence of the particle portion of the wave function for imaginary times τ' < τ by assuming a fixed trial wave function ϕ_tr(x). The phonon configuration is assumed to vary with τ', however, driven by the coupling to the mean particle density |ϕ_tr(x)|^2. Then, for each value of τ, we choose a trial wave function that maximizes the estimate of C(τ). The FWF approach may be further simplified by restricting the trial wave function to a form controlled by a small number of variational parameters and then choosing the values of these parameters so as to maximize C(τ). As noted in Sec. <ref>, in the classical phonon limit, where the nuclear mass is taken to infinity, so that the relevant phonon frequencies are small compared to the temperature T, the problems under consideration reduce to calculations of the density of states or the spectral density for a particle in a random potential with a Gaussian statistical distribution. In the low-energy tail, to leading order (on a logarithmic scale), the ITDH and FWF approximations become equivalent to each other in the classical limit, and their results coincide, at this level, with the results obtained more than five decades ago for a particle in a random potential. In Sec. <ref>, we presented an application of these methods to a one-dimensional model of a particle interacting via a deformation potential with acoustic phonons in the continuum limit. We presented results of numerical calculations using the FWF approximation for a selected set of parameters, γ, ρ, and T controlling the particle-phonon coupling strength, the nuclear mass density, and the temperature. The parameters were chosen such that the phonon frequencies were small compared to the self-trapping energy for the particle in the classical phonon limit, but the ratio between T and the relevant phonon frequencies could take various values. Indeed, it is under these conditions that we expect the ITDH and FWF approximations to be most interesting. At least in this parameter region, we found that quantum fluctuations arising from a finite nuclear mass had little effect when the relevant phonon frequencies were small compared to T, but tended to increase the density of states at low energies when the phonon frequencies were larger than T. In the case where the mass density is held fixed and T is decreased, we found that D(E) remains finite in the limit T → 0 for E greater than E_min, the ground state energy of a self-trapped particle in the classical phonon limit, but D(E)= 0 at T=0 for E < E_min. We intend to present results of an application of the ITDH to the one-dimensional continuum model in a future publication. The calculations presented in Sec. <ref> were confined to examples with relatively strong particle-phonon coupling, where it was sensible to consider a case where the self-trapping energy is large compared to the relevant phonon frequencies. In the opposite case, where the phonon frequencies are large compared to the self-trapping energy, the description may be quite different. 
In this case, the quantum ground state will be a lightly bound polaron, which is highly mobile with a slightly renormalized effective mass, and its kinetic energy must be taken into account at low temperatures. At high temperatures, the classical phonon approximation can still be made, and the methods for treating a particle in a Gaussian random potential could be used. However, a good description of quantum corrections to the low-temperature low-energy tail for weak particle-phonon interactions requires further investigation. Several additional topics were discussed in Sec. <ref>. In Subsection <ref> we showed how the momentum-dependent spectral density A(k,E) can be obtained along with D(E) in either the ITDH or FWF approximation. In Subsection <ref> we discussed the modifications that must be made if one wishes to apply either the ITDH or FWF approximation to a three-dimensional system. In Appendix <ref> below, we present a detailed derivation of the imaginary-time-dependent Hartree equations of motion used in the ITDH approximation. In Appendix <ref>, we discuss corrections to the ITDH that affect the pre-exponential factors in D(E). In particular, we discuss the correction that is of greatest importance in the case of a continuum system in the classical phonon limit. This correction arises from fluctuations of the frozen phonon state in which the potential well retains its optimum form but the center of the well is displaced slightly from the position of the injected particle. Several other topics are treated in additional appendixes. Although our study of A(k,E) and D(E) was largely motivated by the problem of optical absorption by an exciton in the presence of a bath of phonons, there are other applications, at least in principle. As one example, the tunneling density of states measured in an ideal scanning tunneling microscopy (STM) experiment for an electron injected into an empty band in a two-dimensional insulator should be proportional to the quantity D(E) calculated for that system. Similarly, an inverse angle-resolved photoemission spectroscopy (ARPES) experiment could give a measure of A(,E). As a practical matter, however, it is not clear whether one can achieve the sensitivity and energy resolution necessary to probe the low-energy tail region where the methods described above would be directly applicable. Of course, one must also contend with the influence of impurities in this region, and in the case of an STM measurement, one would have to account for the perturbation caused by the presence of a scanning tip. In addition, it is difficult to perform an STM measurement in a completely empty band, as it is necessary for the target to have at least some lateral conductivity. The spectral density A(,E) or the tunneling density of states D(E) for an occupied electron band can be measured, respectively, by an ARPES or STM experiment. The problem in this case is that the effects of electron-electron interactions are likely to be larger than the effects of electron-phonon interactions. A quantity analogous to the spectral density of a particle interacting with phonons in a crystal can arise for an impurity atom injected into an atomic Bose condensate <cit.>. At low energies, the important excitations of the Bose condensate are phonons, and their interaction with the impurity atom may have a form similar to the one considered here, at least in the case of weak coupling. 
However, the analogy breaks down when coupling to the impurity is strong <cit.>, and it is not clear whether one can achieve a regime where a low-energy tail exists and our methods could be directly applicable. More generally, however, we expect that the type of analysis exemplified by our ITDH procedure may have applicability to a variety of problems where coupling of a particle to degrees of freedom other than phonon modes is important. For example, excitations about a Fermi sea of electrons may be treated as a set of harmonic-oscillator modes in some circumstances. Also, magnetic excitations in spin systems may often be treated as independent harmonic oscillators. Appropriate generalizations of the ITDH procedure might be useful for calculations outside the low-energy tail, which would accordingly extend the applicability of the general approach. In principle, our methods could be used when the initial phonon state is not a state of thermal equilibrium, provided it can be described by a density matrix that commutes with the Hamiltonian in the absence of the injected particle and is, therefore, independent of time. § ACKNOWLEDGMENTS The authors are indebted to Eric J. Heller for discussions on possible applications of the coherent state description of phonons to systems of phonons interacting with electrons, which served as a major stimulus for this work. One of the authors (B.I.H.) is pleased to acknowledge his debt to John J. Hopfield, who recommended to him, in 1963, to study a model of excitons interacting with phonons, in an effort to explain the experimentally observed low-energy tail of optical absorption in a variety of insulators. We thank the National Science Foundation for supporting this research, through the STC Center for Integrated Quantum Materials (CIQM), NSF Grant No. DMR-1231319. § EQUATIONS OF MOTION FOR IMAGINARY-TIME-DEPENDENT HARTREE APPROXIMATION The unnormalized many-body state |Ψ(τ)⟩ must satisfy the imaginary-time-dependent Schrödinger equation ∂|Ψ⟩/∂τ= -H|Ψ⟩. Assuming that the factorization approximation is valid, the state |Ψ(τ)⟩=a^-1/2 R(τ) Φ_τ^†|β(τ)⟩ should satisfy ⟨Ψ|∂Ψ/∂τ⟩ = ⟨∂Ψ/∂τ|Ψ⟩ = -ΨHΨ and ⟨Ψ|=⟩a^-1(R(τ))^2. Then, one obtains ∂/∂τln[(⟨Ψ|)⟩^-1/2] = ΨHΨ/⟨Ψ|⟩ = E_ϕ+∑_k ω_k β_k^2, which gives d R/d τ = -R(E_ϕ+∑_k ω_k β_k^2). The Heisenberg equation of motion for a_k gives da_k/dτ=-a_kH=-ω_k a_k-a∑_x ψ_x^†ψ_xλ_k e^-ikx. Then, in the presence of a particle, dβ_k/dτ = -ω_k β_k-λ_kϕ^2_k. Without a particle, dβ_0k/dτ=-ω_k β_0k. Multiplying the Schrödinger equation (<ref>) by ⟨β(τ)|ψ_x, and using the equation (<ref>) for d R/d τ, one obtains dϕ(x,τ)/dτ = - [H_e(x)+V(x,τ)-E_ϕ(τ)-iQ(τ)] ×ϕ(x,τ), where H_e(x) is a position representation of the electronic kinetic energy operator and Q(τ) = i⟨β(τ)|d/dτ|β(τ)⟩ = -∑_kβ_k^*(τ)dβ_k(τ)/dτ = ∑_kβ_k^*(τ)λ_kϕ^2_k, where we have used |β_k(τ)⟩=e^-β_k(τ)^2/2e^β_k(τ)a_k^†|0⟩_k and Eq. (<ref>). Note that λ_-k=λ_k^* due to time-reversal symmetry and ϕ^2_-k=ϕ^2_k^* since ϕ(x)^2 is real. If we choose an initial configuration such that α_-k=α_k^*, then β_-k(τ)=β_k^*(τ) and β_0,-k(τ)=β_0k^*(τ) since the equations of motion preserves the relations. Then, the purely imaginary term vanishes, i.e., Q(τ)=0, meaning that the propagated electronic wave function ϕ(x_0,τ) is real if the initial wave function ϕ(x_0,0) is real. Furthermore, the inner product ∏_kβ_0k(τ)β_k(τ) is real for this initial configuration. Thus, C_α(τ) is real for the initial configuration. 
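To make the structure of these equations concrete, the short sketch below integrates the particle part of the ITDH equations of motion, dϕ/dτ = -[H_e + V - E_ϕ]ϕ, on a periodic lattice, for a real initial wave function (so that Q(τ)=0 as discussed above). The phonon amplitudes are frozen here, so V(x) is a fixed attractive well; in the full scheme they would evolve via dβ_k/dτ = -ω_kβ_k - λ_kϕ^2_k and V(x,τ) would be rebuilt from them at each step. The grid, well shape, time step, and initial state are all illustrative assumptions; the point is only that the imaginary-time Hartree propagation relaxes ϕ toward the lowest state of the instantaneous well while E_ϕ decreases monotonically, which is the mechanism by which C_α(τ) at large τ is controlled by the bound state.

```python
import numpy as np

# Minimal explicit-Euler sketch of the particle part of the ITDH equations:
#   d phi / d tau = -[H_e + V(x) - E_phi] phi ,   E_phi = <phi| H_e + V |phi> .
# Phonon amplitudes beta_k are frozen, so V(x) is a fixed well standing in
# for the Hartree potential.  All numbers below are illustrative.
N, a, m = 256, 0.1, 1.0
x = a * (np.arange(N) - N // 2)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=a)
V = -1.0 / np.cosh(x) ** 2                       # fixed attractive well

def apply_H(phi):
    kin = np.fft.ifft(0.5 * k ** 2 / m * np.fft.fft(phi)).real
    return kin + V * phi

phi = np.exp(-x ** 2)                            # arbitrary real starting state
phi /= np.sqrt(a * np.sum(phi ** 2))
dtau = 1.0e-3
for _ in range(30000):
    Hphi = apply_H(phi)
    E_phi = a * np.sum(phi * Hphi)               # <phi|(H_e + V)|phi>
    phi = phi + dtau * (E_phi * phi - Hphi)      # imaginary-time Hartree step
    phi /= np.sqrt(a * np.sum(phi ** 2))         # re-normalize for numerical safety

# Relaxed energy approaches the ground state of this well (about -0.5 here).
print("relaxed E_phi:", a * np.sum(phi * apply_H(phi)))
```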
If we choose an initial configuration such that C_α(τ) is complex, there will be a complex-conjugate initial configuration, with the same weight w_α, which will give rise to the complex-conjugate value of C_α(τ). Therefore, if there is a unique initial configuration that maximizes w_αC_α(τ), it must be real, with an initial condition that satisfies α_-k=α_k^*. We believe that this will generally be the case. We expect that the imaginary-time-dependent Hartree approximation should become asymptotically exact in the limit where T and the phonon frequencies (∝ M^-1/2) go to zero. In this limit, evolution of the phonon coordinates is slow, and the particle wave function will adiabatically follow the ground state wave function for a particle in the potential well produced by the phonon configuration. This is the limit where a Born-Oppenheimer separation is valid, and corrections to the Hartree approximation become small, despite the fact that the imaginary times τ of interest grow proportional to T^-1 or M^1/2. (Note that there is no Fock exchange term in our problem, since we have only one mobile particle.) § HIGHER-ORDER CORRECTIONS In order to improve our estimates of the density of states in the low-energy tail, we need to examine the pre-exponential factors S(τ), introduced in earlier sections, which we have so far ignored. As mentioned previously, the most important corrections to the ITDH approximation can be calculated, in principle, by treating the difference between the actual Hamiltonian and the imaginary-time-dependent Hartree approximation and the deviation of the actual starting configuration α from the optimum configuration as small perturbations, whose effects one can estimate using second-order time-dependent perturbation theory. We will not attempt to implement this procedure in the present paper. In the classical phonon limit, the analysis is simplified, because for long times τ, for a given potential configuration V(x), the correlation function C_V(τ) is determined by properties of the electronic ground state in this potential. Then, the corrections to the ITDH estimation of C(τ) can be calculated using time-independent perturbation theory to account for deviations of the actual potential V(x) from its optimum form. For a continuum model, it turns out that the most important correction arises from fluctuations where the potential well retains its ideal form but where the center of the well is displaced slightly from the position x_0 where the particle is injected. To be more precise, let us assume that for a given value of τ, the optimum potential has a form V_opt(x) = u (x-x_0) where the shape of u is independent of x_0, and let us write the ground state wave function as ϕ_B(x) = f(x-x_0) . The ground state in a potential well can always be chosen to be a real-valued function with no nodes, and it is necessarily non-degenerate. Moreover, we expect by symmetry that the optimum potential will have a minimum at x=x_0 and will be symmetric about that point, so that the ground state wave function in the well will have a maximum at x_0. Let C_0(τ) be the estimate of C(τ) obtained from (<ref>) with the optimum choice of V. We may now estimate the contribution to C(τ) from a displaced potential of the form V(x) = u(x - x_0 -s), for s ≠ 0. The displacement s will have no effect on the weight factor w_α in Eq. (55) nor on the binding energy E_B, but it will reduce the value of |ϕ_B(x_0)|^2 by a factor |f(s)/f(0)|^2. 
We may obtain an improved estimate of C(τ) by integrating over the displacement s, namely C_1(τ) = S_1(τ) C_0(τ) , S_1(τ) = ν_0 ∫ ds |f(s) / f(0)|^2 , where ν_0 is the density per unit length of independent choices of s (Here, we have assumed that the bound state wave function is broad on the scale of the lattice constant a, so we have taken the continuum limit a → 0, and we have replaced the sum over positions by an integral). We may determine ν_0 as follows. Consider a set of potentials of the form V_η (x) = u(x-x_0 ) - η u' (x-x_0) , with a parameter η. Since the probability of V_η is controlled by the weight function w_α, the variable η will have a Gaussian distribution of the form p(η) = (2 πσ_η^2)^-1/2 e ^- η^2 / 2 σ_η^2 , with σ_η^-2 = ∫ dx dx' u'(x) G(x-x') u'(x') = ∫_- π /a ^π / adk/2πρω_k /|γ_k|^2 (n_k+1/2) k^2ũ(k)^2, where as was defined in Eq. (<ref>), G(x-x') = ∫_- π /a ^π / adk/2 πρω_k /|γ_k|^2 n_k e^i k (x-x') , and ũ (k) = ∫ dx e^-ikx u(x) . But a small nonzero value of η is equivalent to a displacement of the potential by an amount s = η, so we must have ν_0 = lim_η→ 0 p( η) = (2 πσ_η^2)^-1/2 . Since the wave function f(s) is normalized to unity, we obtain S_1(τ) = (2 πσ_η^2 |f(0)|^4 )^-1/2 . The corrected correlation function C_1 leads to a density of states similar to that obtained in Ref. <cit.> using a minimum counting procedure, which approximated D(E) by the density of local minima of the smoothed potential V_s with V_s(x) = θ - E. The factor S_1 represents the correction imposed by the requirement that V_s is a local minimum at a point x, on top of the requirement that the value of V_s(x) is equal to θ-E. For the acoustic phonon model discussed in Sec. <ref>, u(x)=-κ^2/m^2(κ x), ũ(k)=- kπ/m(kπ/2κ), and γ_k=γk. For classical phonon, n_k≈ T/ω≫1, so Eq. (<ref>) gives σ_η^-2≈ K /γ^2 T∫_- ∞ ^∞dk/2π k^2ũ(k)^2 = K /γ^2 T16κ^5/15m^2, where similar approximations were used as in Eq. (<ref>). Then, in Eq. (<ref>) using f(0)=(κ/2)^1/2 [c.f. Eq. (<ref>)], one finds S_1 = ( 32Kκ^3/15π m^2 γ^2T)^1/2 . Although the correction S_1 was derived for the ITDH approximation, it seems reasonable to apply it also to the FWF approximation. Doing this, one obtains Eq. (<ref>), which only differs from the exact asymptotic value (<ref>) by a constant factor of (2/15)^1/2. If one applies the correction to the modified FWF approximation, then, from (<ref>) and (<ref>), one obtains S_1(E) D_MFWF(E) = 1/√(5)4κ^2/π m ξ e^-4κ^3/3m^2ξ, which differs from the exact asymptotic value (<ref>) by a factor of 1/√(5) and is the same as the density of states in Gaussian white noise obtained in Eq. (4.9) of Ref. <cit.> using the approximtion that counted the minima of the smoothed potential. Contributions to the pre-exponential factor beyond those included in S_1 come from fluctuations in V(x) that are orthogonal to u(x-x_0) and u'(x-x_0). As discussed in <cit.>, for the Gaussian white noise potential, these corrections lead to a finite numerical correction S_2 to the density of states in the low-energy tail, which is independent of the energy or the strength of the disorder potential. Moreover, a calculation based on second order perturbation theory is sufficient to obtain results which coincide with the exact asymptotic form of the density of states (<ref>) <cit.>. When quantum fluctuations are taken into account, corrections arising from short-wavelength phonons need to be handled with additional care. 
As mentioned above, these fluctuations lead to an ultraviolet divergence in the self-energy in the continuum model, even in one dimension. Specifically, in second order perturbation theory, one obtains a self-energy Σ (k, E) ≈ - ∫ dk (2n_k+1) |γ_k|^2 / 4 πρω_k (ϵ_k - E) . In the classical phonon limit, where 2n_k+1 → 2T/ω_k, the integral converges at large k. When quantum fluctuations are included, however, the integral has a logarithmic divergence at large k, giving a contribution to the self-energy of form ( mγ^2/ πρ c_s) ln a, where c_s∝ρ^-1/2 is the sound velocity and a is the short-distance cutoff. In situations where the resulting self-energy is large, it may be most convenient to treat the contribution from short-wavelength fluctuations as a downward shift of the bare energy spectrum ϵ_k, while including the remaining fluctuations in a calculation of the pre-expoential factors in the density of states. § POTENTIAL FLUCTUATIONS AND GAUSSIAN WHITE NOISE From Eq. (<ref>), one obtains the operator-valued potential V̂(x)=∑_k ( λ_k a_k^† + λ_-k^* a_-k ) e^-ikx , and the result V̂(x)V̂(x+δ x)_x ≡ L^-1∫ dx V̂(x)V̂(x+δ x) = ∑_kγ_k^2/2ρ L ω_k (a_k+a_-k^†)(a_k^†+a_-k)e^-ikδ x. Then, the average of the correlation function over the thermal ensemble of phonons at temperature T is V̂(x)V̂(x')_T = ∑_kγ_k^2/2ρ L ω_k (2n_k+1)e^-ik(x-x') = ∫_-π/a^π/adk/2πγ_k^2/2ρω_k (2n_k+1)e^-ik(x-x'). If one uses the quasiclassical potential V_qc(x)=⟨α|V̂(x)|α⟩=∑_k ( λ_k α_k^* + λ_-k^* α_-k ) e^-ikx , its spatial autocorrelation function averaged over the thermal ensemble of phonons is V_qc(x)V_qc(x')_T = ∑_kγ_k^2/ρ L ω_k n_ke^-ik(x-x') = ∫_-π/a^π/adk/2πγ_k^2/ρω_k n_ke^-ik(x-x'), missing the quantum fluctuation contribution to Eq. (<ref>)<cit.>. In the classical phonon limit, ρ→∞ while fixing T, n_k=1/e^ω_k/T-1≈T/ω_k∝√(ρ)≫1, which gives V̂(x)V̂(x')_T≈∫_-π/a^π/adk/2πγ_k^2T/ρω_k^2 e^-ik(x-x'). §.§ Continuum acoustic phonon model For the continuum acoustic phonon model in Sec. <ref>, γ_k=γk and ω_k given by Eq. (<ref>), the autocorrelation in the classical phonon limit [Eq. (<ref>)] becomes V̂(x)V̂(x')_T^cont≈γ^2T/K∫_-π/a^π/adk/2π(ka/2)^2/sin^2(ka/2)e^-ik(x-x'). Then, its integration gives ∫_-∞^∞d(x-x') V̂(x)V̂(x')_T^cont = γ^2T/K. §.§ Gaussian white noise For Gaussian white noise, the potential spatial autocorrelation is given by V(x)V(x')=ξ/2δ(x-x') Then, its integration gives ∫_-∞^∞d(x-x') V(x)V(x') = ξ/2 By comparing Eqs. (<ref>) and (<ref>), one obtains the relation between the parameters in the Gaussian white noise model and the continuum model, ξ/2=γ^2T/K. §.§ Gaussian statistics The Fourier component of the quasiclassical potential (<ref>) is Ṽ(k)=∫ dx e^-ikxV_qc(x) = Lλ_k^*(α_k+α_-k^*). Thus, if the function V(x) is specified, the combinations α_k + α^*_-k are thereby determined, but no information is gained about the quantities η_k = α_k - α^*_-k. Note that ∑_k α_k^2/n_k = ∑_k α_k+α_-k^*^2+α_k-α_-k^*^2/4n_k = ∑_k [G̃(k)Ṽ(k)^2/2L + η_k^2/4n_k], where we made use of Eqs. (<ref>) and (<ref>), and G̃(k)=∫ dx e^-ikxG(x). Then, w_α [C.f. Eq. (<ref>)] can be decomposed into the product of functions of Ṽ(k) and η_k, meaning Ṽ(k) and η_k are independent random variables. If we integrate w_α over the variables η_k, we obtain a probability distribution for the function V(x): w_qc = Z_qc^-1exp[-∑_k G̃(k)Ṽ(k)^2/2L] = Z_qc^-1exp[-a^2/2∑_x,x' V(x) G(x-x') V(x')], where Z_qc is a normalization constant. 
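As a numerical companion to these formulas, the sketch below tabulates the Fourier weight of the quasiclassical correlation function, γ_k^2 n_k/(ρω_k), and of the full thermal correlation function, γ_k^2(2n_k+1)/(2ρω_k), against the white-noise strength ξ/2 = γ^2T/K, for the acoustic-phonon model with γ_k = γ k. Parameter values are illustrative. The output shows the sense in which the frozen-phonon potential is "white" only for ω_k ≪ T, while the full quantum weight keeps growing toward the zone boundary, which is the short-wavelength sensitivity noted above.

```python
import numpy as np

# Fourier weight of the phonon-induced potential (Appendix C conventions),
# compared with the Gaussian-white-noise strength xi/2 = gamma^2 T / K.
# Parameters are illustrative.
a, K, rho, gamma, T = 0.1, 1.0, 1.0e4, 1.0, 0.02

def omega(k):
    return (2.0 / a) * np.sqrt(K / rho) * np.abs(np.sin(k * a / 2.0))

def S_quasiclassical(k):
    """gamma_k^2 n_k / (rho omega_k): weight of <V_qc V_qc>_T (frozen phonons)."""
    w = omega(k)
    n = 1.0 / np.expm1(w / T)
    return (gamma * k) ** 2 * n / (rho * w)

def S_quantum(k):
    """gamma_k^2 (2 n_k + 1) / (2 rho omega_k): weight of the full <V V>_T."""
    w = omega(k)
    n = 1.0 / np.expm1(w / T)
    return (gamma * k) ** 2 * (2.0 * n + 1.0) / (2.0 * rho * w)

xi_over_2 = gamma ** 2 * T / K
for kk in [0.1, 1.0, 5.0, 15.0, np.pi / a]:
    print(f"k={kk:6.2f}  omega/T={omega(kk)/T:7.3f}  "
          f"S_qc/(xi/2)={S_quasiclassical(kk)/xi_over_2:8.4f}  "
          f"S_q/(xi/2)={S_quantum(kk)/xi_over_2:8.4f}")
# The quasiclassical weight is flat (white) while omega_k << T and is cut off
# at larger omega_k; the full quantum weight instead keeps growing at large k.
```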
§ MODIFIED FWF APPROXIMATION If we apply the Modified FWF approximation to the one-dimensional Gaussian white noise potential, the imaginary-time correlation function C_ϕ_κ(τ) can be analytically continued to the real-time correlation function through the relation τ=it: C̃_ϕ_κ(t)=C_ϕ_κ(it)= ϕ_κ(x_0)^2e^-it E_e^ϕ_κ-σ_ϕ_κ^2 t^2/2. Since this is a simple Gaussian form, we can easily take the Fourier transform to obtain the corresponding estimate of the density of state [c.f. Eq. (<ref>)]: D_ϕ_κ(E) = κ/2e^-3(E- κ^2/6m)^2/ξκ(3/πξκ)^1/2. We get precisely the same result if we use Laplace's method to obtain the inverse Laplace transform of C_ϕ_κ(τ) directly [C.f. Eq. (<ref>)]. Note that Eq. (<ref>) predicts E = E_e^ϕ_κ-σ_ϕ_κ^2τ_E, giving a linear relation between τ_E and E: τ_E=(E_e^ϕ_κ-E)/σ_ϕ_κ^2. Now the density of states D_ϕ_κ(E) can be maximized with respect to the variational parameter κ. The value of κ that maximizes the density of states is found from dln D_ϕ_κ(E)/dκ = d/dκ[ lnκ - 1/2lnσ_ϕ_κ^2 -(E-E_e^ϕ_κ)^2/2σ_ϕ_κ^2] =0. Since this equation is second order in E, it can be solved for E<0: E=-κ^2/6m-√((κ^2/3m)^2-γ^2 T/3Kκ). Inverting this equation for κ gives the optimal κ_o(E) that maximizes the density of states for the given energy E, from which we obtain D_MFWF(E) ≡max_κ[D_ϕ_κ(E)] = D_κ_o(E)(E). In the low-energy limit E→-∞, the optimal κ_o(E) reduces to the Halperin–Lax result κ_o(E)→κ_E=(-2mE)^1/2<cit.>. Then, the asymptotic form of D_MFWF can be obtained: D_MFWF(E) ≈(3κ_E/4πξ)^1/2e^-4κ_E^3/3m^2ξ for E≪0. This result differs from D_FWF(E), given by Eq. (<ref>), by a factor of (3/2)^1/2. § HOPFIELD'S METHOD FOR TREATING THE FRANCK-CONDON PROBLEM An application of Hopfield's method to our problem proceeds by introducing a function D^η_ϕ_tr (E) which is equal to the trial density of states D_ϕ_tr (E) for a problem where all the coupling constants C_k are multiplied by a constant η^1/2, with 0 ≤η≤ 1. Following Hopfield's arguments, D_ϕ_tr may be obtained by solving the “transport equation”∂ D^η_ϕ_tr (E)/∂η = ∫ d E' K̃ (E-E') D^η_ϕ_tr (E') , with the kernel K̃ (ϵ) = ∑_k |C_k|^2 /ω_k^2 [ (n_k +1) δ (ϵ - ω_k) + n_k δ (ϵ + ω_k ) + ( ω _k ∂/∂ϵ - 2n_k -1 ) δ(ϵ) ] , and the initial condition D^0_ϕ_tr (E) = |ϕ_tr (x_0) |^2 δ (E-E^ϕ_e) . One then identifies D_ϕ_tr (E) with D^1_ϕ_tr (E).
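To see how the modified FWF estimate compares with the exact result quoted earlier in the text, the short script below evaluates D_ϕ_κ(E) of Eq. (<ref>), maximizes it over κ by a brute-force scan, and prints the result next to the Airy-function expression for the Gaussian-white-noise density of states and the asymptotic tail (3κ_E/4πξ)^1/2 e^{-4κ_E^3/3m^2ξ}. The parameter values are illustrative, and the exact curve is differentiated numerically rather than analytically.

```python
import numpy as np
from scipy.special import airy

# Modified-FWF estimate of the low-energy tail for the Gaussian-white-noise
# limit, compared with the exact Airy-function result.  Units: m = 1;
# xi = 2 gamma^2 T / K.  Values are illustrative.
m, xi = 1.0, 0.2

def D_trial(E, kappa):
    """D_{phi_kappa}(E) = (kappa/2) (3/(pi xi kappa))^{1/2} exp(-3(E - kappa^2/6m)^2/(xi kappa))."""
    return 0.5 * kappa * np.sqrt(3.0 / (np.pi * xi * kappa)) * \
        np.exp(-3.0 * (E - kappa ** 2 / (6.0 * m)) ** 2 / (xi * kappa))

def D_mfwf(E):
    kappa = np.linspace(1e-3, 20.0, 20000)
    return np.array([D_trial(e, kappa).max() for e in np.atleast_1d(E)])

def D_exact(E):
    """(m^2 xi)^{-1/3} N'(eps), N(eps) = pi^{-2} [Ai(-2 eps)^2 + Bi(-2 eps)^2]^{-1}."""
    s = (m ** 2 * xi) ** (1.0 / 3.0)
    eps = np.atleast_1d(E) / s ** 2
    deps = 1e-5
    def N(e):
        ai, _, bi, _ = airy(-2.0 * e)
        return 1.0 / (np.pi ** 2 * (ai ** 2 + bi ** 2))
    return (N(eps + deps) - N(eps - deps)) / (2.0 * deps) / s

for E in [-0.5, -1.0, -1.5, -2.0]:
    kE = np.sqrt(-2.0 * m * E)
    D_asym = np.sqrt(3.0 * kE / (4.0 * np.pi * xi)) * np.exp(-4.0 * kE ** 3 / (3.0 * m ** 2 * xi))
    print(f"E={E:5.2f}  exact={D_exact(E)[0]:.3e}  "
          f"modified-FWF={D_mfwf(E)[0]:.3e}  asymptotic={D_asym:.3e}")
```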
http://arxiv.org/abs/2307.02159v1
20230705100053
DiffFlow: A Unified SDE Framework for Score-Based Diffusion Models and Generative Adversarial Networks
[ "Jingwei Zhang", "Han Shi", "Jincheng Yu", "Enze Xie", "Zhenguo Li" ]
stat.ML
[ "stat.ML", "cs.CV", "cs.LG", "math.AP" ]
DiffFlow: A Unified SDE Framework for Score-Based Diffusion Models and Generative Adversarial Networks Jingwei Zhang, Han Shi, Jincheng Yu, Enze Xie, Zhenguo Li (Department of Computer Science and Engineering, HKUST; Huawei Noah's Ark Lab) {jzhangey}@cse.ust.hk, {shi.han, li.zhenguo}@huawei.com ====================================================================================================== Generative models can be categorized into two types: explicit generative models that define explicit density forms and allow exact likelihood inference, such as score-based diffusion models (SDMs) and normalizing flows; and implicit generative models that directly learn a transformation from the prior to the data distribution, such as generative adversarial nets (GANs). While these two types of models have shown great success, they suffer from respective limitations that hinder them from achieving fast sampling and high sample quality simultaneously. In this paper, we propose a unified theoretic framework for SDMs and GANs. We show that: i) the learning dynamics of both SDMs and GANs can be described as a novel SDE named Discriminator Denoising Diffusion Flow (DiffFlow), where the drift is determined by a weighted combination of scores of the real data and the generated data; ii) by adjusting the relative weights between different score terms, we can obtain a smooth transition between SDMs and GANs while the marginal distribution of the SDE remains invariant to the change of the weights; iii) we prove the asymptotic optimality and maximal likelihood training scheme of the DiffFlow dynamics; iv) under our unified theoretic framework, we introduce several instantiations of DiffFlow that provide new algorithms beyond GANs and SDMs with exact likelihood inference and have the potential to achieve a flexible trade-off between high sample quality and fast sampling speed. § INTRODUCTION Generative modelling is a fundamental task in machine learning: given finite i.i.d. observations from an unknown target distribution, the goal is to learn a parametrized model that transforms a known prior distribution (e.g., Gaussian noise) to a distribution that is “close” to the unknown target distribution. In the past decade, we have witnessed rapid developments of a plethora of deep generative models (i.e., generative modelling based on deep neural networks): starting from VAEs <cit.>, GANs <cit.>, Normalizing Flows <cit.>, and then the recently developed score-based diffusion models (SDMs) <cit.>. These deep generative models have shown an amazing capability of modelling distributions in high dimensions, which is difficult for traditional “shallow” generative models such as Gaussian Mixture Models. Although a large family of deep generative models has been invented, they lie in two categories from the perspective of sampling and likelihood inference: explicit generative models are a family of generative models that define explicit density forms and thus enable exact likelihood inference <cit.> by the well-known Feynman-Kac formula <cit.>, and typical examples include score-based diffusion models and normalizing flows; implicit generative models such as GANs, on the other hand, directly learn a transformation from the noise prior to the data distribution, making the closed-form density of the learned distribution intractable.
In this work, we take GANs in the family of implicit generative models and SDMs in the family of explicit generative models as our objects of study, as they dominate the performance in their respective classes of generative models. GANs are trained by playing a minimax game between the generator network and the discriminator network. They are among the first well-known implicit generative models and dominated the image generation task for many years. The sampling process of GANs is fast since it only requires one pass through the generator network that transforms the noise vector to the data vector. The downside of GANs, however, is that the training process is unstable due to the nonconvex-nonconcave objective function, and the sample quality is inferior to that of the current state-of-the-art score-based diffusion models <cit.>. Different from GANs, SDMs achieve high quality image generation without adversarial training. An SDM <cit.> is an explicit generative model that defines a forward diffusion process which iteratively deteriorates the data to random noise; the learning objective is to reverse the forward diffusion process by a backward denoising process. The equivalence between denoising and score matching is well known in the literature <cit.>, and this is where the term “score” comes from. Nevertheless, the iterative nature makes both the training and sampling of SDMs much slower than for GANs. While huge progress has been made in the respective fields of GANs and diffusion models, little work has been done on linking and studying the relationship between them. In this work, we aim to answer the following research question: Can we obtain a unified algorithmic and theoretic framework for GANs and SDMs that enables a flexible trade-off between high sample quality and fast sampling speed with exact likelihood inference? The goal of this paper is to give an affirmative answer to the above question. The key observation is that the learning dynamics of both SDMs and GANs can be described by a novel SDE named Discriminator Denoising Diffusion Flow (DiffFlow), whose drift term consists of a weighted combination of scores of the current marginal distribution p_t(x) and the target distribution. By carefully adjusting the weights of scores in the drift, we can obtain a smooth transition between GANs and SDMs. Here, we use the word “smooth” to imply that the marginal distribution p_t(x) at any time t remains unchanged by the weight adjustments from GANs to SDMs. We name this property the “Marginal Preserving Property”, for which we give a rigorous definition in later sections. We also provide an asymptotic and nonasymptotic convergence analysis of the dynamics of the proposed SDE under some isoperimetry property of the smoothed target distribution and design a training algorithm with maximal likelihood guarantees. Under our unified algorithmic framework, we can provide several instantiations of DiffFlow that approximate the SDE dynamics by neural networks and enable a flexible trade-off between high sample quality and fast sampling speed. § BACKGROUND §.§ Score-Based Diffusion Models SDMs are a type of generative model trained by denoising samples corrupted by various levels of noise. The generation process is then defined by sampling vectors from pure noise and progressively denoising the noise vectors into images. <cit.> formally describes the above processes by a forward diffusion SDE and a backward denoising SDE. Specifically, denote the data distribution by q(x).
By sampling a particle X_0∼ q(x), the forward diffusion process {X_t}_t∈ [0, T] is defined by the following SDE: dX_t = f(X_t, t)dt + g(t)dW_t where T>0 is a fixed constant; f(·, ·): ℝ^k× [0, T]→ℝ^k is the drift coefficient; g(·): [0, T]→ℝ_≥ 0 is a predefined diffusion noise scheduler; {W_t}_t∈ [0, T] is the standard Brownian motion in ℝ^k. If we denote the probability density of X_t by p_t(x), then we hope that the distribution of p_T(x) is close to some tractable gaussian distribution π(x). If we set f(X_t, t) ≡ 0 and g(t) = √(2t) as in <cit.>, then we have p_t(x) = q(x) ⊛𝒩(0, t^2I):= q(x; t) where ⊛ denotes the convolution operation and q(x, T)≈π(x) = 𝒩(0, T^2I). <cit.> define the backward process by sampling an initial particle from X_0∼π(x) ≈ p_T(x). Then the backward denoising process {X_t}_t∈ [0, T] is defined by the following SDE: dX_t = [g^2(T-t)∇log p_T-t(X_t) - f(X_t, T-t)]dt + g(T-t)dW_t . It worths mentioning that the above denoising process is a trivial time-reversal to the original backward process defined in <cit.>, which also appears in previous work <cit.> for notational simplicity. In the denoising process [We omit the word “backward” since this process is the time-reversal of the original backward process.], the critical term to be estimated is ∇log p_T-t(x), which is the score function of the forward process at time T-t. p_T-t(x) is often some noise-corrupted version of the target distribution q(x). For example, as discussed before, if we set f(X_t, t) ≡ 0 and g(t) = √(2t), then ∇log p_T-t(x) becomes ∇log q(x, T-t), which can be estimated through denoising score matching <cit.> by a time-indexed network s_θ(x, t) <cit.>. §.§ Generative Adversarial Networks In 2014, <cit.> introduce the seminal work of generative adversarial nets. The GAN's training dynamics can be formulated by a minimax game between a discriminator network d_θ_D(·): 𝒳→ [0, 1] and a generator network G_θ_G(·): 𝒵→𝒳. From an intuitive understanding, the discriminator network is trained to classify images as fake or real while the generator network is trained to produce images from noise that “fool” the discriminator. The above alternate procedure for training generator and discriminator can be formulated as a minimax game: min_θ_Gmax_θ_D𝔼_x∼ q(x)[log d_θ_D(x)] + 𝔼_z∼π(z)[log (1-d_θ_D(G_θ_G(z)) )] where z∼π(z) is sampled from gaussian noise. The training dynamics of GANs is unstable, due to the highly nonconvexity of the generator and the discriminator that hinders the existence and uniqueness of the equilibrium in the minimax objective. Despite lots of progress has been made in the respective field of diffusion models and GANs in recent years, little is known about the connection between them. In the next section, we will provide a general framework that unifies GANs and diffusion models and show that the learning dynamics of GANs and diffusion models can be recovered as a special case of our general framework. § GENERAL FRAMEWORK In order to obtain a unified SDE framework for GANs and SDMs, let us forget the terminologies “forward process” and “backward process” in the literature of diffusion models. We aim to construct a learnable (perhaps stochastic) process {X_t}_t∈ [0, T] indexed by a continuous time variable 0≤ t ≤ T such that X_0∼π(x) for which π(x) is a known Gaussian distribution and X_T ∼ p_T(x) that is “close” to the target distribution q(x). The closeness can be measured by some divergence or metric which we would give details later. 
Specifically, given X_0∼π(x) from noise distribution, we consider the following evolution equation of X_t∼ p_t(x) for t∈ [0, T] with some T>0: dX_t =[f(X_t, t) + β(t, X_t)∇logq(u(t)X_t; σ(t))/p_t(X_t)+ g^2(t)/2∇log p_t(X_t)]dt + √(g^2(t)-λ^2(t))dW_t where f(·, ·): ℝ^k× [0, T]→ℝ^k; β(·, ·): ℝ^k× [0, T]→ℝ_≥ 0 and u(·), σ(·), g(·), λ(·): [0, T]→ℝ_≥ 0 are all predefined scaling functions. For the ease of presentation, we name the above SDE the “Discriminator Denoising Diffusion Flow” (DiffFlow). It seems hard to understand the physical meaning behind DiffFlow at a glance, we will explain it in detail how the name DiffFlow comes from and how it unifies GANs and SDMs in later subsections: with careful tuning on the scaling functions, we can recover dynamics of GANs and SDMs respectively without changing the marginal distributions p_t(x). Furthermore, we will see that DiffFlow unifies a wider “continuous” spectrum of generative models where GANs and diffusion models are corner cases of DiffFlow with specialized scaling functions. §.§ Single Noise-Level Langevin Dynamics Let us start with the simplest case: the single-noise level Langevin dynamics. In DiffFlow, if we set u(t)≡ 1, f(X_t, t) ≡ 0, λ(t)≡ 0, β(t, X_t) ≡β(t), g(t)≡√(2β(t)) and σ(t)≡σ_0 for some fixed σ_0≥ 0, then we obtain dX_t =β(t)∇log q(X_t; σ_0)dt + √(2β(t))dW_t which is exactly the classical Langevin algorithm. However, as pointed out by <cit.>, the single noise level Langevin algorithm suffers from slow mixing time due to separate regions of data manifold. Hence, it cannot model complicated high dimensional data distribution: it even fails to learn and generate reasonable features on MNIST datasets <cit.>. Hence, it is reasonable to perturb data with multiple noise levels and perform multiple noise-level Langevin dynamics, such as the Variance Explosion SDE with Corrector-Only Sampling (i.e., NCSN) <cit.>. §.§ Score-based Diffusion Models §.§.§ Variance Explosion SDE Without loss of generality, we consider the Variance Explosion (VE) SDE adopted in <cit.>. DiffFlow can be easily adapted to other VE SDEs by simply changing the noise schedule σ(t) and scaling β(t, X_t). We can set u(t)≡ 1, f(X_t, t) ≡ 0, β(t, X_t) ≡ 2(T-t), λ(t)≡√(β(t)) = √(T-t), g(t)≡√(2β(t)) = 2√(T-t) and σ(t)≡ T-t, then we obtain dX_t =2(T-t)∇log q(X_t; T-t)dt +√(2(T-t)) dW_t which is the VE SDE in <cit.>. For a general VE SDE in <cit.>, the denoising process is dX_t = d[σ^2(T-t)]/dt∇log q(X_t;√(σ^2(T-t)-σ^2(0))) + √(d[σ^2(T-t)]/dt)dW_t which can be obtained by setting u(t)≡ 1, f(X_t, t) ≡ 0, β(t, X_t) ≡d[σ^2(T-t)]/dt, λ(t)≡√(β(t, X_t)) = √(d[σ^2(T-t)]/dt), g(t)≡√(2β(t, X_t)) = √(2d[σ^2(T-t)]/dt) and σ(t)≡√(σ^2(T-t)-σ^2(0)) in DiffFlow. §.§.§ Variance Preserving SDE Similar to previous analysis, for Variance Preserving (VP) SDE in <cit.>, the denoising process is dX_t = [β(T-t)∇log p_T-t(X_t) +1/2β(T-t)X_t]dt + √(β(T-t))dW_t . Now we need to represent the score of the forward process ∇log p_t(X_t) in VP SDE by the score of the noise-corrupted target distribution. We have the following proposition, which can be obtained by simple application of stochastic calculus. Assume X_0∼ q(x) and the forward process {X_t}_t∈ [0, T] in VP SDE is given by dX_t = -1/2β(t)X_tdt + √(β(t))dW_t . Then the score of p_t(x) (note: X_t∼ p_t(x)) can be represented as ∇log p_t(x) = ∇log q( exp(1/2∫_0^tβ(s)ds) x; 1- exp(-∫_0^tβ(s)ds)) . By ito lemma, we have d[exp(1/2∫_0^tβ(s)ds)X_t] = exp(1/2∫_0^tβ(s)ds)dX_t + 1/2β(t)exp(1/2∫_0^tβ(s)ds)X_tdt . 
Combining with the forward VP SDE, we have d[exp(1/2∫_0^tβ(s)ds)X_t] = exp(1/2∫_0^tβ(s)ds) √(β(t))dW_t . Hence, X_t = exp(-1/2∫_0^tβ(s)ds)[ X_0 + ∫_0^texp(1/2∫_0^uβ(s)ds) √(β(u))dW_u ] . By the ito isometry and martingale property of brownian motions, we have 𝔼[∫_0^texp(1/2∫_0^uβ(s)ds) √(β(u))dW_u] = 0 and 𝔼[∫_0^texp(1/2∫_0^uβ(s)ds) √(β(u))dW_u]^2 = ∫_0^t[exp(1/2∫_0^uβ(s)ds) √(β(u))]^2du = ∫_0^texp(∫_0^uβ(s)ds) β(u)du = ∫_0^texp(∫_0^uβ(s)ds) d [∫_0^uβ(s)du ] = exp(∫_0^tβ(s)ds) -1 . Hence exp(-1/2∫_0^tβ(s)ds) ∫_0^texp(1/2∫_0^uβ(s)ds) √(β(u))dW_u ∼𝒩(0, I- exp(-∫_0^tβ(s)ds)I ) . Then we have p_t(x) = exp(1/2∫_0^tβ(s)ds)q(exp(1/2∫_0^tβ(s)ds) x) ⊛𝒩(0, I- exp(-∫_0^tβ(s)ds)I ) where the proof ends by taking logarithm and applying the divergence operator ∇ on both sides. Now, we can obtain the denoising process of the VP SDE as follows, dX_t = [β(T-t) ∇log q( exp(1/2∫_0^T-tβ(s)ds) X_t; 1- exp(-∫_0^T-tβ(s)ds)) +1/2β(T-t)X_t]dt + √(β(T-t))dW_t which can be obtained by setting u(t)≡exp(1/2∫_0^T-tβ(s)ds), f(X_t, t) ≡1/2β(T-t)X_t, β(t, X_t) ≡β(T-t), λ(t)≡√(β(T-t)), g(t)≡√(2β(T-t)) and σ(t)≡ 1- exp(-∫_0^T-tβ(s)ds) in DiffFlow. Similar procedure can show that sub-VP SDE proposed in <cit.> is also lied in the DiffFlow framework with specialized scaling functions, we leave it as a simple exercise for readers. §.§.§ Diffusion ODE Flow Similar to previous analysis, for the diffusion ODE corresponds to VE SDE in <cit.>, the denoising process is dX_t = 1/2d[σ^2(T-t)]/dt∇log q(X_t;√(σ^2(T-t)-σ^2(0))) which can be obtained by setting u(t)≡ 1, f(X_t, t) ≡ 0, β(t, X_t) ≡1/2d[σ^2(T-t)]/dt, λ(t)≡√(2β(t, X_t)) = √(2d[σ^2(T-t)]/dt), g(t)≡√(2β(t, X_t)) = √(2d[σ^2(T-t)]/dt) and σ(t)≡√(σ^2(T-t)-σ^2(0)) in DiffFlow. The ODEs correspond to VP SDEs and sub-VP SDEs can be obtained from DiffFlow by specilizing scaling functions with nearly the same procedure, hence we omit the derivations. §.§ Generative Adversarial Networks §.§.§ The Vanilla GAN To start with the simplest case, let us show how DiffFlow recovers the training dynamics of the vanilla GAN <cit.>. First, let us set u(t)≡ 1, f(X_t, t) ≡ 0, λ(t)≡ 0, g(t)≡ 0 and σ(t)≡σ_0≥ 0. Notice that σ_0 is usually set to be a small positive constant to ensure the smoothness of the gradient for the generator. Then, the DiffFlow reduces to the following DiffODE: dX_t =[ β(t, X_t)∇logq(X_t;σ_0)/p_t(X_t)]dt  . Next, we show that by an extreme coarse approximating to the dynamics of this ODE by a generator network with specialized β(t, X_t) , one recovers the dynamics of the Vanilla GAN. Note that the critical term to be estiamted in DiffODE is ∇logq(X_t;σ_0)/p_t(X_t), which is the gradient field of the classifier between the real data and generated data at time t. This term can be estimated by taking gradients to the logistic classifier defined as follows: D_t(x) := logq(x;σ_0)/p_t(x) = min_D[𝔼_x∼ q(x;σ_0)log(1+e^-D(x)) + 𝔼_x∼ p_t(x)log(1+e^D(x)) ] Then we can update the samples by X_t+1 = X_t + η_tβ(t, X_t) ∇ D_t(X_t) where η_t>0 is the discretization stepsize. Hence, DiffODE naturally yield the following algorithm and we can show this algorithm is equivalent to GAN by setting the discriminator loss to logistic loss: d_θ_D(x) = 1/1+e^-D_θ_D(x). For the vanilla GANs <cit.>, the update of discriminator is the same as DiffODE-GANs. 
Since from d_θ_D(x) = 1/1+e^-D_θ_D(x), we have 𝔼_x∼ q(x;σ_0)log(1+e^-D_θ_D(x)) + 𝔼_x∼ p_t(x)log(1+e^D_θ_D(x)) = 𝔼_x∼ q(x;σ_0)log(1+e^-logd_θ_D(x)/1-d_θ_D(x)) + 𝔼_x∼ p_t(x)log(1+e^logd_θ_D(x)/1-d_θ_D(x)) = -𝔼_x∼ q(x;σ_0)log(d_θ_D(x)) - 𝔼_x∼ p_t(x)log(1-d_θ_D(x))  . It remains to show that the update of the generator is also equivalent. The gradient of the generator is ∇_θ_Glog(1-d_θ_D(G_θ_G(z))) = - ∇_θ_Glog(1+e^D_θ_D(G_θ_G(z))) = - 1/1+e^-D_θ_D(G_θ_G(z))∇_θ_GD_θ_D(G_θ_G(z)) = - d_θ_D(G_θ_G(z)) ∇_θ_GD_θ_D(G_θ_G(z)) = - d_θ_D(G_θ_G(z)) ∇ D_θ_D(G_θ_G(z))·∇_θ_GG_θ_G(z) . Hence, at the time step t-1, we obtain the discriminator with parameter θ_D^t-1 and update generator by the following equation θ_G^t = θ_G^t-1 + λ_t [1/n∑_i=1^nd_θ_D^t-1(G_θ_G^t-1(z_i)) ∇ D_θ_D^t-1(G_θ_G^t(z_i))·∇_θ_GG_θ_G^t-1(z) ] where z_i∼𝒩(0, I) and λ_t is the learning rate for mini-batch SGD at time t. If we instead run a gradient descent step on the MSE loss of the generator in the DiffFlow-GAN, we obtain θ_G^t = θ_G^t-1 + λ_t[1/n∑_i=1^nη_t β(z_i, t)∇ D_θ_D^t(G_θ_G^t(z_i))·∇_θ_GG_θ_G^t(z_i) ] . Then the equivalence can be shown by setting η_t β(z_i, t) = d_θ_D^t-1(G_θ_G^t-1(z_i)) . In practice, the vanilla GAN faces the problem of gradient vanishing on the generator update. A common trick applied is to use the “non-saturating loss”, i.e., the generator update is by instead minimizing -𝔼_z∼π(z)[log(d_θ_D(G_θ_G(z)))]. Hence, the gradient of the generator is -∇_θ_Glog(d_θ_D(G_θ_G(z))) = ∇_θ_Glog(1+e^-D_θ_D(G_θ_G(z))) = - e^-D_θ_D(G_θ_G(z))/1+e^-D_θ_D(G_θ_G(z))∇_θ_GD_θ_D(G_θ_G(z)) = -(1- d_θ_D(G_θ_G(z)) )∇_θ_GD_θ_D(G_θ_G(z)) = -(1- d_θ_D(G_θ_G(z)) )∇ D_θ_D(G_θ_G(z))·∇_θ_GG_θ_G(z) . Similarly, with the discriminator parameter θ_D^t-1, we can update generator by the following equation θ_G^t = θ_G^t-1 + λ_t [1/n∑_i=1^n(1-d_θ_D^t-1(G_θ_G^t-1(z_i))) ∇ D_θ_D^t-1(G_θ_G^t(z_i))·∇_θ_GG_θ_G^t-1(z_i) ] Then the equivalence can be shown by setting η_t β(z_i, t) = 1-d_θ_D^t-1(G_θ_G^t-1(z_i)) . The DiffFlow-GANs formulation provides a more intuitive explanation on why non-saturating loss can avoid vanishing gradients for a poor-trained generator: if at time t-1, we have a poor generator G_θ_G^t-1(z), generating poor samples that are far from the real data; then d_θ_D^t-1(G_θ_G^t-1(z_i)) would close to 0, which would lead to zero particle update gradient for original GANs while the “non-saturating” loss can avoid this problem. In practice, one can avoid the gradient vanishing for DiffFLow-GANs by two methods: either by setting β(z_i, t)≡ 1 to maintain the gradient for particle updates; or proposing a noising annealing strategy for the discriminator: during the early stage of training, the discriminator is weakened by classifying a noise-corrupted target distribution q(x; σ(t)) from fake data p_t(x). The weakening discriminator trick has been adopted in many real deployed GAN models, and it has been shown helpful during the early stage of GAN training <cit.>. Since the noise annealing on discriminator shares some spirits with SDMs, we will discuss in details later. §.§.§ Three Improvements on Vanilla GANs From previous analysis, we show that the DiffFlow framework provides a novel view on GANs and has potential for several improvements on the vanilla GANs algorithm. 
The learning dynamics of vanilla GANs are a coarse approximation of DiffODE: the one-step gradient of the generator is determined by the particle movements driven by DiffODE, and the driving force is exactly the gradient field of the logistic classifier between real data and fake data (i.e., the discriminator). Furthermore, for the vanilla GAN, the particle gradient is scaled by the probability of the particle being real data, which is near zero at the early stage of training: this is exactly the source of gradient vanishing. From this perspective, we can obtain the following improvements. First, simplify β(t, X_t)≡β(t), i.e., eliminate the dependence of the particle movement on the scaling factor determined by the probability of the particle being real. This alleviates gradient vanishing at the early stage of training. Second, since the vanilla GAN only approximates the particle movements by one-step gradients of the least-squares regression, it is too coarse to simulate the real dynamics of DiffODE. Indeed, one could directly compose a generator from the gradient fields of the discriminators at each time step t, and this generator could directly simulate the original particle movements of the DiffODE. This idea can be implemented by borrowing from diffusion models: we adopt a time-indexed neural network discriminator d_θ_t(x, t) that is trained to classify real and fake data at time t. Lastly, since at the early stage of training the generator faces too much pressure from a “smart” discriminator, the transition from noise to data can be sharp and the training dynamics unstable early on. To achieve a smooth transition from noise to data, we again borrow from diffusion models and adopt a noise annealing strategy σ(t) that weakens the discriminator. At time t, the discriminator d_θ_t(x, t) learns to classify between noise-corrupted real data q(x;σ(t)) and fake data p_t(x), where the corruption σ(t) decreases continuously as the time index increases, with σ(0)= σ_max and σ(T)= σ_min. This idea is analogous to diffusion models such as NCSN <cit.>; the only difference is that diffusion models learn the score of the noise-corrupted target distribution q(x;σ(t)) instead of a classifier at time index t. With the above three improvements, inspired by the perspective of ODE approximations and by diffusion models, we propose an improved GAN algorithm. The training and sampling procedure is described as follows; a condensed code sketch is also given below. In the algorithm's pseudocode we train the discriminators θ_t^* independently across time steps, since this avoids the slow convergence that is partly due to conflicting optimization directions between different time steps <cit.>. It is worth mentioning that our framework offers much more flexibility in designing the time-indexed discriminator: we can either share a universal θ across all times t, as done in diffusion models, or train discriminators θ_t^* independently for each t. It remains an open problem which method is better for such generative models. §.§ A Unified SDE Framework In the previous sections, we have shown that by specializing the scaling functions of DiffFlow, we can recover the dynamics of the single-noise Langevin algorithm, diffusion models, and GANs. In fact, DiffFlow offers a much wider continuous spectrum of generative models in which GANs and SDMs are corner cases, as shown in Figure <ref>. 
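As a concrete reference for the improved GAN procedure described above, the following condensed PyTorch-style sketch trains an independent time-indexed discriminator at each step and moves the particles along its gradient field under the annealing schedule σ(t). The network architecture, the linear annealing schedule, and all hyperparameters here are illustrative placeholders rather than the configuration used in our experiments.

```python
import torch
import torch.nn as nn

class TimeDiscriminator(nn.Module):
    """Logit of the classifier D(x, t) ~ log q(x; sigma(t)) / p_t(x)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x, t):
        t = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t], dim=1)).squeeze(-1)

def sigma(t, T, sigma_max=1.0, sigma_min=0.01):
    # simple linear annealing that weakens the discriminator early on
    return sigma_max + (sigma_min - sigma_max) * t / T

def train_and_sample(real_data, dim, T=1.0, n_steps=100, beta=1.0,
                     disc_iters=50, n_particles=1024, lr=1e-3):
    """One pass of the improved DiffODE-style particle procedure (sketch)."""
    x = torch.randn(n_particles, dim)             # X_0 ~ pi(x)
    dt = T / n_steps
    bce = nn.BCEWithLogitsLoss()
    for i in range(n_steps):
        t = torch.tensor([[i * dt]])
        d = TimeDiscriminator(dim)                # independent discriminator per step
        opt = torch.optim.Adam(d.parameters(), lr=lr)
        for _ in range(disc_iters):               # fit D_t by logistic regression
            idx = torch.randint(0, real_data.shape[0], (n_particles,))
            real = real_data[idx] + sigma(i * dt, T) * torch.randn(n_particles, dim)
            logits = torch.cat([d(real, t), d(x.detach(), t)])
            labels = torch.cat([torch.ones(n_particles), torch.zeros(n_particles)])
            opt.zero_grad()
            bce(logits, labels).backward()
            opt.step()
        # move particles along the gradient field of the learned classifier
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(d(x, t).sum(), x)[0]
        x = (x + beta * dt * grad).detach()
    return x
```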
§.§.§ DiffFlow Decomposition Recall that the DiffFlow dynamics can be described as the following SDE[For the ease of presentation, we remove the dependence of β(t, X_t) on X_t.]: dX_t =[f(X_t, t)_Regularization +β(t)∇logq(u(t)X_t; σ(t))/p_t(X_t)_Discriminator+ g^2(t)/2∇log p_t(X_t)_Denoising]dt + √(g^2(t)-λ^2(t))dW_t_Diffusion . We name each term in DiffFlow SDE according to its physical meaning to the particle evolutions: the first term f(X_t, t) is usually a linear function of X_t that acts similarly to weight decay for particle updates; the second term is the gradient of the classifier between target data and real data, hence it is named discriminator; the third term is named denoising, since it removes gaussian noise of standard deviation g(t) according to the Kolmogorov Forward Equation; the last term is obviously the diffusion. While the above explanation is obvious in physical meanings, it it hard to explain the continuous evolution of models between GANs and SDMs by changing the scaling functions g(·) and λ(·). Notice that when g^2(t)≤β(t), DiffFlow can also be written as follows: dX_t =[g^2(t)/2∇log q(u(t)X_t; σ(t))]dt + √(g^2(t)-λ^2(t))dW_t_SDMs + [(β(t)-g^2(t)/2)∇logq(u(t)X_t; σ(t))/p_t(X_t)+f(X_t, t) ]_Regularized GANsdt . The above decomposition implies that when g^2(t)≤ 2β(t), DiffFlow can be seen as a mixed particle dynamics between GANs and SDMs. The relative mixture weight is controlled by g(t) and β(t). If we fix β(t), then the GANs dynamics would vanish to zero as g(t) increase to √(2β(t)). In the limit of g(t)=√(2β(t)) and λ(t)≡ 0, DiffFlow reduces to the pure Langevin algorithm; in the limit of g(t)≡ 0 and λ(t)≡ 0, DiffFlow reduces to the pure GANs algorithm. As for the evolution from the pure Langevin algorithm to the diffusion SDE models, we need to increase λ(t) from 0 to g(t)/√(2) to match the stochasticity of VP/VE SDE. If we further increase λ(t) to g(t), we would obtain the diffusion ODE <cit.>. §.§.§ Stochastic Langevin Churn Dynamics (SLCD) If g(·) to g^2(t)>2β(t), the GAN component would vanish and we would obtain a Langevin-like algorithm described by the following SDE: dX_t = [f(X_t, t) + β(t)∇log q(u(t)X_t; σ(t))]_Regularized Score Dynamicsdt + ( g^2(t)/2- β(t))∇log p_t(X_t)_Denoisingdt + √(g^2(t)-λ^2(t))dW_t_Diffusion . We name the above SDE dynamics the Stochastic Langevin Churn Dynamics (SLCD) where we borrow the word “churn” from the section 4 of the EDM paper <cit.>, which describes a general Langevin-like process of adding and removing noise according to diffusion and score matching respectively. The SDE dynamics described above is to some sense exactly the Langevin-like churn procedure in the equation (7) of <cit.>: the particle is first raised by a gaussian noise of standard deviation √(g^2(t)-λ^2(t)) and then admits a deterministic noise decay of standard deviation of √(g^2(t)-2β(t)). This is where the name Stochastic Langevin Churn Dynamics comes from. §.§.§ Analytic Continuation of DiffFlow From previous analysis, we know that the scaling function g(·) controls the proportion of GANs component and λ(·) controls the stochasticity and aligns the noise level among Langevin algorithms, diffusion SDEs, and diffusion ODEs. We will show later that the change of g(t) would not affect the marginal distributions p_t(x) and only λ(·) plays a critical role in controlling the stochasticity of particle evolution. 
Further more, we could extend to λ^2(t)<0 to enable more stochasticity than Langevin algorithms: by letting λ(t) =√(-1)λ(t), we can obtain the following analytic continuation of DiffFlow on λ(t): dX_t =[f(X_t, t)_Regularization +β(t)∇logq(u(t)X_t; σ(t))/p_t(X_t)_Discriminator+ g^2(t)/2∇log p_t(X_t)_Denoising]dt + √(g^2(t)+λ^2(t))dW_t_Diffusion . As shown in Figure <ref>, the analytic continuation area is marked as grey that enables DiffFlow achieves arbitrarily level of stochasticity by controlling λ(t) wisdomly. §.§.§ Diffusion-GAN: A Unified Algorithm Notice that when 0<g(t)<√(2β(t)), the DiffFlow admits the following mixed particle dynamics of SDMs and GANs that we name Diffusion-GANs, dX_t =[g^2(t)/2∇log q(u(t)X_t; σ(t))]dt + √(g^2(t)-λ^2(t))dW_t_SDMs + [(β(t)-g^2(t)/2)∇logq(u(t)X_t; σ(t))/p_t(X_t)+f(X_t, t) ]_Regularized GANsdt . One can implement Diffusion-GANs under DiffFlow framework by learning a time-indexed discriminator using logistic regression d_θ_t^*(x, t)≈logq(u(t)X_t; σ(t))/p_t(X_t) and a score network using score mathcing s_θ'_*(x, t)≈∇log q(u(t)X_t; σ(t)) . Then the sampling process is defined by discretizing the following SDE within interval t∈ [0, T]: dX_t =[g^2(t)/2s_θ'_*(x, t) ]dt + √(g^2(t)-λ^2(t))dW_t + [(β(t)-g^2(t)/2)∇ d_θ_t^*(x, t) +f(X_t, t) ]dt . Notice that we should tune λ(t) to achieve a good stochasticity. It remains an open problem empirically and theoretically that how to design an optimal noise level to achieve the best sample generation quality. §.§.§ Marginal Preserving Property One may want to argue that the current framework of DiffFlow is just a simple combination of particle dynamics of GANs and SDMs respectively, where g(·) acts as an interpolation weight between them. It is already known in the literature that both GANs <cit.> and SDMs <cit.> can be modelled as particle dynamics with different ordinary or stochastic differential equations. We want to argue that despite both GANs and SDMs can be modelled as ODE/SDE with different drift and diffusion terms, the dynamics behind these differential equations are different: even with the same initializations of particles, the path measure would deviate significantly for GANs and SDMs. Hence, a simple linear interpolation between dynamics of GANs and SDMs lacks both theoretic and practical motivations and we need sophisticated design of each term in the differential equations to align the marginal distribution of different classes of generative models lied in SDMs and GANs. This work provides a reasonable design of a unified SDE “DiffFlow” for GANs and SDMs that enables flexible interpolations between them by changing the scaling functions in the SDE. Furthermore, we would show in the following proposition that the interpolation is “smooth” from GANs to SDMs: the marginal distribution p_t(x) for all t≥ 0 would remain invariant to the interpolation factor g(·). The marginal distribution p_t(x) of DiffFlow dX_t =[f(X_t, t) + β(t)∇logq(u(t)X_t; σ(t))/p_t(X_t)+ g^2(t)/2∇log p_t(X_t)]dt + √(g^2(t)-λ^2(t))dW_t . remains invariant to g(·). By the Kolmogorov Forward Equation <cit.>, the marginal distribution p_t(x) follows the following PDE: ∂ p_t(x)/∂ t = -∇·[p_t(x)(f(x, t) + β(t)∇logq(u(t)x; σ(t))/p_t(x)+ g^2(t)/2∇log p_t(x))] +g^2(t)-λ^2(t)/2∇·∇ p_t(x) = -∇·[p_t(x)(f(x, t) + β(t)∇logq(u(t)x; σ(t))/p_t(x)]) - ∇·[p_t(x)g^2(t)/2∇log p_t(x)] +g^2(t)-λ^2(t)/2∇·∇ p_t(x) = -∇·[p_t(x)(f(x, t) + β(t)∇logq(u(t)x; σ(t))/p_t(x)]) -λ^2(t)/2∇·∇ p_t(x)  . Hence, the marginal distribution p_t(x) is independent of g(·). 
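To make the mixed dynamics concrete, the following sketch discretizes the Diffusion-GAN sampling SDE introduced above with Euler-Maruyama, assuming a trained score network and a trained time-indexed discriminator are available; the callable signatures and default schedules are placeholders rather than prescriptions, and the sketch corresponds to the regime g^2(t) ≤ 2β(t).

```python
import torch

@torch.no_grad()
def sample_diffusion_gan(score_net, disc, x0, T=1.0, n_steps=1000,
                         g=lambda t: 1.0, lam=lambda t: 0.0,
                         beta=lambda t: 1.0, f=lambda x, t: 0.0):
    """Euler-Maruyama discretization of the mixed Diffusion-GAN SDE (sketch).

    score_net(x, t) -> approx. grad log q(u(t) x; sigma(t))
    disc(x, t)      -> approx. log q(u(t) x; sigma(t)) / p_t(x), one scalar per sample
    """
    x = x0.clone()
    dt = T / n_steps
    for i in range(n_steps):
        t = i * dt
        g2 = g(t) ** 2
        # gradient of the discriminator logit w.r.t. x (needs autograd locally)
        with torch.enable_grad():
            xg = x.detach().requires_grad_(True)
            d_grad = torch.autograd.grad(disc(xg, t).sum(), xg)[0]
        drift = (0.5 * g2 * score_net(x, t)
                 + (beta(t) - 0.5 * g2) * d_grad
                 + f(x, t))
        noise_scale = max(g2 - lam(t) ** 2, 0.0) ** 0.5
        x = x + drift * dt + noise_scale * dt ** 0.5 * torch.randn_like(x)
    return x
```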
The Marginal Preserving Property implies that if we increase g(·), the DiffFlow evolutes from GANs to SDMs and further to SLCD as shown in the Figure <ref>, while the marginal distribution p_t(x) remains invariant. Another important factor λ(·) is used to control the stochasticity of DiffFlow to align the noise level among Langevin Dynamics, diffusion SDEs, and diffusion ODEs. § CONVERGENCE ANALYSIS For the ease of presentation, without loss of generality, we set f(X_t, t) ≡ 0, β(t, X_t)≡ 1, u(t)≡ 1, σ(t)≡σ_0 for some α, σ_0>0 and λ(t)≡ 0, then the DiffFlow becomes the following SDE: dX_t =[∇logq(X_t; σ_0)/p_t(X_t)+ g^2(t)/2∇log p_t(X_t)]dt + g(t)dW_t . The current simplified DiffFlow is general enough to incorporate the original DiffFlow's full dynamics: by rescaling the particles and gradients, we can recover general f(X_t, t), β(t), and u(t), as done in the DDIM <cit.>. The general λ(t) and σ(t) can be recovered by changing the noise scheduler from constants to the desired annealing scheduler. In order to study the convergence of DiffFlow dynamics, we need to find its variational formulation, i.e., a functional that DiffFlow minimizes. Finding such functional is not hard, as shown in the following lemma, which implies the functional is exactly the KL divergence between the generated and target distributions. Given stochastic process {X_t}_t≥ 0 and its dynamics determined by dX_t =[∇logq(X_t; σ_0)/p_t(X_t)+ g^2(t)/2∇log p_t(X_t)]dt + g(t)dW_t with X_0∼π(x) and σ_0, λ_0>0. Then the marginal distribution p_t(x) of X_t minimizes the following functional L(p) = KL( p q(x;σ_0)) := ∫_ℝ^kp(x)logp(x)/q(x;σ_0)dx  . Furthermore, ∂ L(p_t)/∂ t = - ∫_ℝ^kp_t(x)∇logp_t(x)/q(x; σ_0)_2^2dx . By the Kolmogorov Forward Equation, the marginal distribution p_t(x) follows the following PDE: ∂ p_t(x)/∂ t = -∇·[p_t(x)( ∇logq(x; σ_0)/p_t(x)+ g^2(t)/2∇log p_t(x))] +g^2(t)/2∇·∇ p_t(x) = -∇·[p_t(x)(∇logq(x; σ_0)/p_t(x)+ g^2(t)/2∇log p_t(x))] +g^2(t)/2∇·[p_t(x)∇log p_t(x)] = -∇·[p_t(x)( ∇logq(x; σ_0)/p_t(x)+ g^2(t)/2∇log p_t(x) -g^2(t)/2∇log p_t(x) )] = -∇·[p_t(x)( ∇logq(x; σ_0)/p_t(x))] . Then, we have ∂ L(p_t)/∂ t = ∫_ℝ^k[logp_t(x)/q(x;σ_0) +1] ∂ p_t(x)/∂ tdx = ∫_ℝ^klogp_t(x)/q(x;σ_0)∂ p_t(x)/∂ tdx = - ∫_ℝ^klogp_t(x)/q(x;σ_0)∇·[p_t(x)( ∇logq(x; σ_0)/p_t(x))]dx = ∫_ℝ^klogp_t(x)/q(x;σ_0)∇·[p_t(x)( ∇logp_t(x)/q(x;σ_0))]dx Through integral by parts, we have ∂ L(p_t)/∂ t =- ∫_ℝ^kp_t(x)( ∇logp_t(x)/q(x;σ_0))·( ∇logp_t(x)/q(x;σ_0)) dx =- ∫_ℝ^kp_t(x)∇logp_t(x)/q(x;σ_0)_2^2dx . Hence, the KL divergence L(p_t) is decreasing along the marginal distribution path {p_t(x)}_t≥ 0 determined by DiffFlow. §.§ Asymptotic Optimality via Poincare Inequality The main tool to prove the asymptotic convergence of DiffFlow is the following Gaussian Poincare inequality from <cit.>. Suppose f: ℝ^k→ℝ is a smooth function and X is a multivariate Gaussian distribution X∼𝒩(0, σ^2I) where I∈ℝ^k× k. Then Var[f(X)] ≤σ^2𝔼[∇ f(X)_2^2] . We also show that under gaussian smoothing, the log-density obeys quadratic growth. Given a probability distribution q(x) and let q(x;σ) be the distribution of x+ϵ where x∼ q(x) and ϵ∼𝒩(0, σ^2 I). Then there exists some constants A_σ, B_σ>0 and C_σ such that for any 0<γ<1 |log q(x;σ)|≤ A_σx_2^2+B_σx_2+C_σ where C_q(γ) :=inf{s : ∫_B(s)q(u)du≥γ}, A_σ =1/2σ^2, B_σ=C_p(γ)/σ^2 and C_σ=max{C^2_p(γ)/2σ^2- log(γ1/(2π)^k/2σ^k), log( 1/(2π)^k/2σ^k) } , B(s)={x∈ℝ^k: x_2≤ s} . Let G_σ(x) be the probability density function of N(0, σ^2 I), then the resulting smoothed distribution q(x;σ) is q(x;σ) = ∫_ℝ^dq(u)G_σ(x-u)du . 
Let B(r)={u: u_2≤ r}, then q(x;σ) =∫_B(r)q(u)G_σ(x-u)du + ∫_ℝ^k∖ B(r)q(u)G_σ(x-u)du ≥∫_B(r)q(u)G_σ(x-u)du ≥∫_B(r)q(u)G_σ(x+rx/x_2)du =G_σ(x+rx/x_2)∫_B(r)q(u)du . Fix some small constant 0<γ<1, if we choose r= C_p(γ) :=inf{s : ∫_B(s)p(u)du≥γ}. This implies q(x;σ) ≥γ G_σ(x+C_p(γ)x/x_2) =γ1/(2π)^k/2σ^kexp(- x+C_p(γ)x/x_2_2^2/2σ^2) , Taking logarithm on both sides of (<ref>), we obtain log q(x;σ)≥log(γ1/(2π)^k/2σ^k)- x+C_p(γ)x/x_2_2^2/2σ^2 = -1/2σ^2x_2^2 -C_p(γ)/σ^2x_2-C^2_p(γ)/2σ^2+ log(γ1/(2π)^k/2σ^k) . We also have q(x;σ) = ∫_ℝ^kq(u)G_σ(x-u)du ≤∫_ℝ^kq(u)G_σ(0)du = G_σ(0) = 1/(2π)^k/2σ^k . Therefore, log q(x;σ)≤log( 1/(2π)^k/2σ^k). Let A_σ =1/2σ^2, B_σ=C_p(γ)/σ^2 and C_σ=max{C^2_p(γ)/2σ^2- log(γ1/(2π)^k/2σ^k), log( 1/(2π)^k/2σ^k) }, then |log q(x;σ)|≤ A_σx_2^2+B_σx_2+C_σ . In the above Lemma, for the measure q, we define the quantity C_q(γ) :=inf{s : ∫_B(s)q(u)du≥γ} . The meaning of this quantity C_p(γ) is the smallest ball centred at the origin of ℝ^k that captures at least γ mass of the probability measure q. Now, we are ready to prove the asymptotic convergence theorem. Given stochastic process {X_t}_t≥ 0 and its dynamics determined by dX_t =[∇logq(X_t; σ_0)/p_t(X_t)+ g^2(t)/2∇log p_t(X_t)]dt + g(t)dW_t with X_0∼π(x) and σ_0, λ_0>0. Then the marginal distribution X_t∼ p_t(x) converges almost everywhere to the target distribution q(x;σ_0), i.e., lim_t→∞p_t(x) = q(x;σ_0) a.e. By Lemma <ref>, ∂ L(p_t)/∂ t =- ∫_ℝ^kp_t(x)∇logp_t(x)/q(x;σ_0)_2^2dx. Hence, by the nonnegativity of KL divergence, lim_t→∞∫_ℝ^kp_t(x)∇logp_t(x)/q(x;σ_0)_2^2dx = 0 . Furthermore, since ∇√(f(x))_2^2 = ∇ f(x)_2^2/4f(x) = f(x)/4∇log f(x)_2^2 we have ∫_ℝ^kp_t(x)∇logp_t(x)/q(x; σ_0)_2^2dx = 4∫_ℝ^kq(x;σ_0)∇√(p_t(x)/q(x; σ_0))_2^2dx = 4∫_ℝ^kexp(log q(x;σ_0))∇√(p_t(x)/q(x; σ_0))_2^2 ≥ 4∫_ℝ^kexp(-A_σ_0x_2^2-B_σ_0x_2-C_σ_0))∇√(p_t(x)/q(x; σ_0))_2^2dx  . Since x_2≤x_2^2+1, we have ∫_ℝ^kp_t(x)∇logp_t(x)/q(x; σ_0)_2^2dx ≥ 4∫_ℝ^kexp(-(A_σ_0+B_σ_0)x_2^2-C_σ_0-1)∇√(p_t(x)/q(x; σ_0))_2^2dx = 4exp(-C_σ_0-1)∫_ℝ^kexp(-x_2^2/(A_σ_0+B_σ_0)^-1) ∇√(p_t(x)/q(x; σ_0))_2^2dx = 4(π/A_σ_0+B_σ_0)^k/2exp(-C_σ_0-1)∫_ℝ^k𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I)∇√(p_t(x)/q(x; σ_0))_2^2dx =4(π/A_σ_0+B_σ_0)^k/2exp(-C_σ_0-1)𝔼_x∼𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I) (∇√(p_t(x)/q(x; σ_0))_2^2) ≥ 8(A_σ_0+B_σ_0)(π/A_σ_0+B_σ_0)^k/2exp(-C_σ_0-1)Var_x∼𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I) ( √(p_t(x)/q(x; σ_0)))  . where the last inequality is due to Gaussian Poincare inequality. Hence, we obtain lim_t→∞Var_x∼𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I) ( √(p_t(x)/q(x; σ_0))) = 0 . Furthermore, by previous analysis on the lower bound of exp(log q(x;σ_0)), we have exp(log q(x;σ_0))≥ D_σ_0𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I) . where D_σ_0:=4(π/A_σ_0+B_σ_0)^k/2exp(-C_σ_0-1) . Then 𝔼_x∼𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I) (√(p_t(x)/q(x; σ_0))) = ∫_ℝ^k𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I)√(p_t(x)/q(x; σ_0))dx ≤1/√(D_σ_0)∫_ℝ^k√(𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I))√(p_t(x))dx ≤1/√(D_σ_0)∫_ℝ^k𝒩(x; 0, 1/2(A_σ_0+B_σ_0)I)dx∫_ℝ^kp_t(x)dx ≤1/√(D_σ_0) . Hence, lim_t→∞√(p_t(x)/q(x; σ_0)) = const. ≤1/√(D_σ_0)<∞ This implies lim_t→∞p_t(x) = q(x;σ_0) a.e. §.§ Nonasymptotic Convergence Rate via Log-Sobolev Inequality In the previous section, we have proved the asymptotic optimality of DiffFlow. In order to obtain an explicit convergence rate to the target distribution, we need stronger functional inequalities, i.e., the log-Sobolev inequality <cit.>. For a smooth function g: ℝ^k→ℝ, consider the Sobolev space defined by the weighted L^2 norm: g_L^2(q) = ∫_ℝ^k g(x)^2q(x)dx. 
We say q(x) satisfies the log-Sobolev inequality with constant ρ>0 if the following inequality holds for any ∫_ℝ^kg(x)q(x)=1, ∫_ℝ^k g(x)log g(x)· q(x)dx≤2/ρ∫_ℝ^k∇√(g(x))_2^2q(x)dx . Then we can obtain a linear convergence of marginal distribution p_t(x) to the smoothed target distribution q(x;σ_0). The analysis is essentially the same as Langevin dynamics as in <cit.>, since their marginal distribution shares the same Fokker-Planck equation. Given stochastic process {X_t}_t≥ 0 and its dynamics determined by dX_t =[∇logq(X_t; σ_0)/p_t(X_t)+ g^2(t)/2∇log p_t(X_t)]dt + g(t)dW_t with X_0∼π(x) and σ_0, λ_0>0. If the smoothed target distribution q(x;σ_0) satisfies the log-Sobolev inequality with constant ρ_q>0. Then the marginal distribution X_t∼ p_t(x) converges linear to the target distribution q(x;σ_0) in KL divergence, i.e., KL(p_t(x)q(x;σ_0)) ≤exp(-2ρ_q t) KL(π(x)q(x;σ_0)). From Lemma <ref>, we obtain ∂ L(p_t)/∂ t=∂/∂ tKL(p_t(x)q(x;σ_0)) = - ∫_ℝ^kp_t(x)∇logp_t(x)/q(x; σ_0)_2^2dx . Since q(x;σ_0) satisfies the log-Sobolev inequality with constant ρ_q>0, by letting the test function g(x) = p_t(x)/q(x;σ_0), we obtain KL(p_t(x)q(x;σ_0)) ≤2/ρ_q∫_ℝ^k∇√(p_t(x)/q(x;σ_0))_2^2q(x;σ_0)dx . Since we already know from previous analysis that ∇√(f(x))_2^2 = f(x)/4∇log f(x)_2^2 we have KL(p_t(x)q(x;σ_0)) ≤1/2ρ_q∫_ℝ^kp_t(x)∇p_t(x)/q(x;σ_0)_2^2q(x)dx = - 1/2ρ_q∂/∂ tKL(p_t(x)q(x;σ_0)) . Hence by Grönwall's inequality, KL(p_t(x)q(x;σ_0)) ≤exp(-2ρ_q t) KL(π(x)q(x;σ_0)). Different from asymptotic analysis in the previous subsection that holds for general q(x), the convergence rate is obtained under the assumption that the smoothed target q(x;σ_0) satisfies the log-Sobolev inequalities. This is rather a strong assumption, since we need control the curvature lower bound of smoothed energy function U(x; q; σ_0) to satisfy the Lyapunov conditions for log-Sobolev inequality <cit.>. This condition holds for some simple distribution: if q(x)∼𝒩(μ_q, σ_q^2I), then the Hessian of its smoothed energy function U(x; q; σ_0)=-∇^2log q(x;σ_0) = (σ_0^2+σ_q^2)^-1I, then by Bakery-Emery criteria <cit.>, we have ρ_q = (σ_0^2+σ_q^2)^-1 . However, for a general target q(x), obtaining the log-Sobolev inequality is relatively hard. If we still want to obtain an explicit convergence rate, we can seek for an explicit regularization f(t, X_t) on the DiffFlow dynamics to restrict the path measure in a smaller subset and then the explicit convergence rate can be obtained by employing uniform log-Sobolev inequalities along the path measure <cit.>. The detailed derivation of such log-Sobolev inequalities under particular regularization f(t, X_t) on the path measure is beyond the scope of this paper and we leave it as future work. § MAXIMAL LIKELIHOOD INFERENCE Recall that the dynamics of DiffFlow is described by the following SDE on the interval t∈ [0, T] with X_0∼π(x) and X_t∼ p_t(x): dX_t =[f(X_t, t) + β(t, X_t)∇logq(u(t)X_t; σ(t))/p_t(X_t)+ g^2(t)/2∇log p_t(X_t)]dt + √(g^2(t)-λ^2(t))dW_t . At the end time step T, we obtain p_T(x) and we hope this is a “good” approximation to the target distribution. Previous analysis shows that as T to infinity, p_T(x) would converge to the target distribution q(x;σ_0) under simplified dynamics. However, the convergence rate is relatively hard to obtain for general target distribution with no isoperimetry property. In this section, we define the goodness of approximation of p_T(x) from another perspective: maximizing the likelihood at the end of time. 
There are already many existing works on analyzing the likelihood dynamics of diffusion SDE, for instance <cit.>. These works provide a continuous version of ELBO for diffusion SDEs by different techniques. In this section, we follow the analysis of <cit.> that adopts a Feymann-Kac representation on p_T then using the Girsanov change of measure theorem to obtain a trainable ELBO. Here we mainly consider when g^2(t)≤β(t), DiffFlow can also be seen as a mixed particle dynamics between SDMs and GANs. Recall in this case, DiffFlow can be rewritten as, dX_t =[g^2(t)/2∇log q(u(t)X_t; σ(t))]dt + √(g^2(t)-λ^2(t))dW_t_SDMs + [(β(t)-g^2(t)/2)∇logq(u(t)X_t; σ(t))/p_t(X_t)+f(X_t, t) ]_Regularized GANsdt . Given a time-indexed discriminator d_θ_D^t(x, t): ℝ^d× [0, T]→ℝ^d using logistic regression d_θ_D^t(x, t)≈logq(u(t)X_t; σ(t))/p_t(X_t) and a score network s_θ_c^t(x, t): ℝ^d× [0, T]→ℝ^d using score mathcing s_θ_c^t(x, t)≈∇log q(u(t)X_t; σ(t)) . Then the approximated process is given by the following neural SDE: dX_t =[g^2(t)/2s_θ_c^t(X_t, t) ]dt + √(g^2(t)-λ^2(t))dW_t + [(β(t)-g^2(t)/2)∇ d_θ_D^t(X_t, t) +f(X_t, t) ]dt . We need to answer the question of how to train the score networks s_θ_c^t(x, t) and the time-indexed discriminator d_θ_D^t(x, t) that optimizes the ELBO of the likelihood log p_T(x). Following the analysis of <cit.>, in order to obtain a general connection between maximal likelihood estimation and neural network training, we need to apply the Girsanov formula to obtain a trainable ELBO for the likelihood of the terminal marginal density. Before we introduce our main theorem, we need the following two well-known results from stochastic calculus. The first one is Feymann-Kac Formula, adapted from Theorem 7.6 in <cit.>. Suppose u(t, x): [0, T]×ℝ^d→ℝ is of class C^1, 2([0, T]×ℝ^d]) and satisfies the following PDE: ∂ u(t, x)/∂ t + c(x, t)u(t, x) + σ(t)^2/2∇·∇ u(t, x)+b(t, x)·∇ u(t, x) = 0 with terminal condition u(T, x) = u_T(x). If u(t, x) satisfies the polynomial growth condition max_0≤ t≤ T|u(t, x)| ≤ M(1+x^2μ), x∈ℝ^d for some M>0 and μ≥ 1. Then u(t, x) admits the following stochastic representation u(t, x) = 𝔼[ u_T(X_T)exp(∫_t^T c(X_s, s) ds) | X_t = x] where {X_s}_t≤ s≤ T solves the following SDE with initial X_t = x, dX_s = b(t, X_s)dt + σ(t) dW_t . Then we need the well-known Girsanov Theorem to measure the deviation of path measures. Let (Ω,ℱ,ℙ) be the underlying probability space for which W_s is a Brownian motion. Let W_s be an ito process solving dW_s = a(ω, s)ds + dW_s for ω∈Ω, 0≤ s≤ T and W_0=0  and a(ω, s) satisfies the Novikov's condition, i.e., 𝔼[exp( 1/2∫_0^Ta^2(ω, s)ds)]<∞ . Then W_s is a Brownian motion w.r.t. ℚ determined by logdℙ/dℚ(ω) = ∫_0^Ta(ω, s)· dW_s + 1/2∫_0^Ta(ω, s)^2 ds . With the above two key lemmas, we are able to derive our main theorem. Let {x̂(t)}_t∈ [0, T] be a stochastic processes defined by (<ref>) with initial distribution x̂(0)∼ q_0(x). The marginal distribution of x̂(t) is denoted by q_t(x). Then the log-likelihood of the terminal marginal distribution has the following lower bound, log q_T(x)≥𝔼_Y_T[log q_0(Y_T)| Y_0 = x] +1/2∫_0^Tσ^2(T-s)𝔼_Y_s|Y_0=x[∇log p(Y_s|Y_0=x)_2^2 ]ds - 1/2∫_0^T𝔼_Y_s|Y_0=x[c(Y_s, T-s;θ_s)/σ(T-s)-σ(T-s)∇log p(Y_s|Y_0=x)_2^2 ]ds . where dY_s = σ(T-s)dW_s  , and c(x, t;θ_t) = f(x, t) + g^2(t)/2s_θ_c^t(x, t) + (β(t)-g^2(t)/2)∇ d_θ_D^t(x, t)  , and σ^2(t)= g^2(t)-λ^2(t) . 
By Fokker-Planck equation, we have ∂ q_t(x)/∂ t +∇· c(x, t;θ_t)q_t(x) +c(x, t;θ_t)·∇ q_t(x) +σ^2(t)/2∇·∇ q_t(x)=0 where c(x, t;θ_t) = f(x, t) + g^2(t)/2s_θ_c^t(x, t) + (β(t)-g^2(t)/2)∇ d_θ_D^t(x, t)  , and σ^2(t)= g^2(t)-λ^2(t) . Let the time-reversal distribution v_t(x) = q_T-t(x) for 0≤ t≤ T, then v_t(x) satisfies the following PDE, ∂ v_t(x)/∂ t -∇· c(x, T-t;θ_s)v_t(x) -c(x, T-t;θ_s)·∇ v_t(x) -σ^2(T-t)/2∇·∇ v_t(x)=0 . By Feymann-Kac formula, we have q_T(x) = v_0(x) = 𝔼[ q_0(Y_T)exp(-∫_0^T ∇· c(Y_s, T-s;θ_s) ds) | Y_0 = x] where Y_s is a diffusion process solving dY_s = -c(X_s, T-s;θ_s)ds + σ(T-s)dW_s . By Jensen's Inequality, log q_T(x) = log𝔼_ℚ[ dℙ/dℚ q_0(Y_T)exp(-∫_0^T ∇· c(Y_s, T-s;θ_s) ds) | Y_0 = x] ≥𝔼_ℚ[ logdℙ/dℚ +log q_0(Y_T) -∫_0^T ∇· c(Y_s, T-s;θ_s) ds | Y_0 = x] . Now, if we choose dW_s = a(ω, s)ds + dW_s and ℚ as logdℙ/dℚ(ω) = ∫_0^Ta(ω, s)· dW_s + 1/2∫_0^Ta(ω, s)^2 ds = ∫_0^Ta(ω, s)· (dW_s -a(ω, s)ds ) + 1/2∫_0^Ta(ω, s)^2 ds = ∫_0^Ta(ω, s)· dW_s - 1/2∫_0^Ta(ω, s)^2 ds Then dW_s is Brownian motion under ℚ measure and log q_T(x) ≥𝔼_ℚ[ ∫_0^Ta(ω, s)· dW_s - 1/2∫_0^Ta(ω, s)^2 ds +log q_0(Y_T) -∫_0^T ∇· c(Y_s, T-s;θ_s) ds | Y_0 = x] = 𝔼_ℚ[ -1/2∫_0^Ta(ω, s)^2 ds +log q_0(Y_T) -∫_0^T ∇· c(Y_s, T-s;θ_s) ds | Y_0 = x] = 𝔼_Y_T[log q_0(Y_T)| Y_0 = x] - 𝔼_ℚ[ 1/2∫_0^T[a(ω, s)^2 + ∇· c(Y_s, T-s;θ_s) ]ds | Y_0 = x]  . Furthermore, we have dY_s = -c(Y_s, T-s;θ_s) ds + σ(T-s)dW_s = -(c(Y_s, T-s;θ_s) + σ(T-s)a(ω, s))ds + σ(T-s)dW_s By choosing appropriate a(ω, s), we can obtain a trainable ELBO. In particular, we choose a(ω, s) = - c(Y_s, T-s;θ_s) / σ(T-s) . Then we have dY_s = σ(T-s)dW_s  . and log q_T(x) ≥𝔼_Y_T[log q_0(Y_T)| Y_0 = x] - 1/2∫_0^T𝔼_Y_s[(c(Y_s, T-s;θ_s)^2/σ^2(T-s) + ∇· c(Y_s, T-s;θ_s) ) | Y_0 = x]ds = 𝔼_Y_T[log q_0(Y_T)| Y_0 = x] +1/2∫_0^Tσ^2(T-s)𝔼_Y_s|Y_0=x[∇log p(Y_s|Y_0=x)_2^2 ]ds - 1/2∫_0^T𝔼_Y_s|Y_0=x[c(Y_s, T-s;θ_s)/σ(T-s)-σ(T-s)∇log p(Y_s|Y_0=x)_2^2 ]ds . Then we can obtain the objective of the maximum likelihood inference by jointly training a weighted composite network c(x, t;θ_t) = f(x, t) + g^2(t)/2s_θ_c^t(x, t) + (β(t)-g^2(t)/2)∇ d_θ_D^t(x, t) to be some weighted version of denoising score matching. § RELATED WORK Our work is inspired from recent series of work on relating the training dynamics of GANs and SDMs to the particle evolution of ordinary or stochastic differential equations, see <cit.> and references therein. The training dynamics of vanilla GANs <cit.> is highly unstable due to the minimax objective. Since solving an optimal discriminator yields the some divergence minimization objective for GANs, and the divergence minimization naturally yields the particle algorithms and gradient flows, several subsequent works try to improve the stability of GAN training from the perspective of particle gradient flow, which avoids the minimax training framework <cit.>. The key idea is that the evolution of particles can be driven by the gradient flow of some distance measure between probability distributions, such as KL divergence or Wasserstein distance. Hence, the evolution dynamics can be an ODE and the driven term of ODE is determined by the functional gradient of the distance measure: for KL divergence, the functional gradient is the gradient field of the logistic classifier <cit.>, which plays the role of the discriminator from the perspective of vanilla GANs. The earliest development of diffusion models are mainly focused on learning a series of markov transition operators that maximize the ELBO <cit.>. 
In a parallel development, <cit.> propose a series of score-based generative models built on multiple levels of denoising score matching. In 2020, the seminal work of <cit.> showed that diffusion models are essentially score-based generative models that perform score matching at multiple noise levels of the corrupted target distribution. <cit.> further showed that the sampling dynamics of diffusion models can be modelled as stochastic differential equations whose drift term is the score of the noise-corrupted target. Since then, diffusion models have also been referred to as score-based diffusion models. Although the training dynamics of both GANs and score-based diffusion models can be modelled as particle algorithms whose dynamics are described by a respective differential equation, a unified differential equation that describes both has been lacking. Our main contribution is to propose such an SDE, which enables us to build a continuous spectrum that unifies GANs and diffusion models. § CONCLUSION We design a unified SDE, “DiffFlow”, that unifies the particle dynamics of the Langevin algorithm, GANs, diffusion SDEs, and diffusion ODEs. Our framework provides a continuous spectrum beyond SDMs and GANs, yielding new generative algorithms such as Diffusion-GANs and SLCD. We provide an asymptotic convergence analysis of DiffFlow and show that maximal likelihood inference can be performed for both GANs and SDMs under the current SDE framework. However, the objective for maximal likelihood inference requires jointly training the score network and the discriminator network on a weighted version of denoising score matching, which would be hard to implement efficiently. It would be interesting to further explore how to choose a better reference measure in the Girsanov change-of-measure theorem to obtain a simpler trainable ELBO for maximal likelihood inference in DiffFlow.
http://arxiv.org/abs/2307.00290v1
20230701101246
All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning
[ "Can Cui", "Ruining Deng", "Quan Liu", "Tianyuan Yao", "Shunxing Bao", "Lucas W. Remedios", "Yucheng Tang", "Yuankai Huo" ]
cs.CV
[ "cs.CV", "cs.LG" ]
All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning This research was supported by NIH R01DK135597 (Huo), NSF CAREER 1452485, NSF 2040462,NCRR Grant UL1-01 (now at NCATS Grant 2 UL1 TR000445-06), NVIDIA hardware grant, resources of ACCRE at Vanderbilt University Can Cui Computer Science Vanderbilt University Nashville, USA Ruining Deng Computer Science Vanderbilt University Nashville, USA Quan Liu Computer Science Vanderbilt University Nashville, USA Tianyuan Yao Computer Science Vanderbilt University Nashville, USA Shunxing Bao Electrical and Computer Engineering Vanderbilt University Nashville, USA Lucas W. Remedios Computer Science Vanderbilt University Nashville, USA Bennett A. Landman Electrical and Computer Engineering Vanderbilt University Nashville, USA Yucheng Tang NVIDIA Cooperation Redmond, WA, USA Yuankai Huo Computer Science Vanderbilt University Nashville, USA [email protected] ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The Segment Anything Model (SAM) is a recently proposed prompt-based segmentation model in a generic zero-shot segmentation approach. With the zero-shot segmentation capacity, SAM achieved impressive flexibility and precision on various segmentation tasks. However, the current pipeline requires manual prompts during the inference stage, which is still resource intensive for biomedical image segmentation. In this paper, instead of using prompts during the inference stage, we introduce a pipeline that utilizes the SAM, called all-in-SAM, through the entire AI development workflow (from annotation generation to model finetuning) without requiring manual prompts during the inference stage. Specifically, SAM is first employed to generate pixel-level annotations from weak prompts (e.g., points, bounding box). Then, the pixel-level annotations are used to finetune the SAM segmentation model rather than training from scratch. Our experimental results reveal two key findings: 1) the proposed pipeline surpasses the state-of-the-art (SOTA) methods in a nuclei segmentation task on the public Monuseg dataset, and 2) the utilization of weak and few annotations for SAM finetuning achieves competitive performance compared to using strong pixel-wise annotated data. foundation model, SAM, Segment Anything, annotation, prompt § INTRODUCTION The foundation models have recently been proposed as a powerful segmentation model <cit.>. Segment Anything Model (SAM), as an example, was trained by millions of images to achieve a generic segmentation capability <cit.>. SAM can automatically segment a new image, and it also accepts the prompts input of foreground/background points or the box regions for better segmentation <cit.>. 
However, recent studies have revealed SAM's limited performance in specific domain tasks, such as medical image segmentation, particularly when an insufficient number of prompts are available <cit.>. The main reason is that medical data was rare to see in the training set of SAM while the medical segmentation tasks always in requirement of higher professional knowledge than natural image segmentation <cit.>. Using the finetuning strategy to adapt the generic segmentation model to downstream tasks provides a promising solution to utilize the power of the generic model in detecting low-level and general image patterns and features but adjust the final segmentation based on the characteristics and high-level understanding of downstream tasks. Previous approaches <cit.> have proposed finetuning methods to improve SAM's performance in downstream tasks. However, these methods mostly require complete data annotation for finetuning and did not explore the impact of weak annotation and few training data on the finetuning of the pretrained SAM model. Nuclei segmentation is a crucial task in biomedical research and clinical applications, but manual annotation of nuclei in whole slide images (WSIs) is time-consuming and labor-intensive. Previous works attempted to automatically segment nuclei with supervised learning <cit.>. More recently, some methods used self-supervised learning to further improve the model performance <cit.>. SAM has great potential to benefit nuclei segmentation if it can be adapted appropriately. This paper investigates the performance of transferring the SAM to nuclei segmentation. Previous studies have indicated that SAM performed poorly in nuclei segmentation without box/point information as prompts, but achieved promising segmentation when the bounding box of every nuclei was provided as the prompt in the inference stage. However, manually annotating all the boxes during inference remains time-consuming. To address this issue, we propose a pipeline for label-efficient finetuning of SAM, with no requirement for annotation prompts during inference. Also, instead of relying on complete annotations for finetuning, we leverage weak annotations to further reduce annotation costs while achieving comparable segmentation performance to state-of-the-art (SOTA) methods. In this work, we proposed the All-in-SAM pipeline, utilizing the pretrained SAM for annotation generation and model finetuning. Instead of using prompts during the inference stage, no manual prompts are required during the inference stage (Fig. <ref>). The contribution of this work can be summarized into two points: 1) Utilization of weak annotations for cost reduction: Rather than relying exclusively on fully annotated data for finetuning, we demonstrate the effectiveness of leveraging weak annotations and the pretrained SAM. This approach helps to minimize annotation costs while achieving segmentation performance that is comparable to the current state-of-the-art methods. 2) Development of a pipeline for label-efficient finetuning: We propose a method that allows SAM to be finetuned for nuclei segmentation without the requirement of annotation prompts during inference. This significantly reduces the time and effort involved in manual annotation. Overall, this work aims to enhance the application of SAM in nuclei segmentation by addressing the annotation burden and cost issues through label-efficient finetuning and the utilization of weak annotations. 
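As an illustration of the first stage of the pipeline, the sketch below shows how weak box annotations could be converted into pixel-level pseudo labels with a frozen SAM, assuming the publicly released segment_anything package; the helper name, checkpoint path, and loop structure are illustrative and are not a verbatim description of our implementation.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def boxes_to_pseudo_masks(image, boxes, checkpoint="sam_vit_h_4b8939.pth"):
    """Turn weak box annotations into pixel-level pseudo labels with frozen SAM.

    image : HxWx3 uint8 RGB patch
    boxes : (N, 4) array of nuclei bounding boxes in XYXY pixel coordinates
    """
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)

    pseudo_label = np.zeros(image.shape[:2], dtype=np.uint8)
    for box in boxes:
        masks, scores, _ = predictor.predict(
            box=np.asarray(box, dtype=np.float32),
            multimask_output=False,      # one mask per nucleus box
        )
        pseudo_label |= masks[0].astype(np.uint8)
    return pseudo_label
```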
§ METHOD §.§ Overview Motivated by the promising performance of the SAM model in interactive segmentation tasks with sparse prompts and the potential for finetuning, we propose a segmentation pipeline that leverages weak and limited annotations and apply this pipeline to the nuclei segmentation task. The proposed pipeline consists of two main stages: SAM-empowered annotation and SAM finetuning. In the first stage, we utilize the pretrained SAM model to generate high-quality approximate nuclei masks for pathology images. This is achieved by providing the bounding boxes of nuclei as input to the pretrained SAM model. These approximate masks serve as initial segmentation masks for the nuclei. In the second stage, the generated approximate masks are employed to finetune the SAM model, which allows the model to adapt and refine its segmentation capabilities specifically for nuclei segmentation. The proposed pipeline is displayed in Fig. <ref>. Two stages are introduced in detail in <ref> and <ref>. Furthermore, we evaluate the performance of the model when only a small number of annotated data for downstream tasks. By minimizing the number of annotated samples, we aim to reduce annotation labor while still achieving satisfactory segmentation results. §.§ SAM-empowered annotation The SAM model consists of three key components: the prompt encoder, the image encoder, and the mask decoder. The image encoder utilizes the Vision Transformer (ViT) as its backbone, employing a 14×14 windowed attention mechanism and four equally spaced global attention blocks to learn image representations effectively. The prompt encoder can take two forms: sparse or dense. In the sparse form, prompts can be in the form of points, boxes, or text, whereas in the dense form, prompts are represented as a grid or mask. The encoded prompts are then added to the image representation for the subsequent mask decoding process. In a previous study <cit.>, it was observed that when only automatically generated dense prompts were used, nuclei segmentation sometimes failed to produce satisfactory results. However, significant improvement was achieved when weak annotations such as points or boxes were provided during the segmentation inference. Notably, when the bounding box of nucleus was available as a weak annotation, the segmentation achieved a dice value of 0.883 in the public Monuseg dataset <cit.>, significantly surpassing the results obtained from supervised learning methods. It indicates that SAM has strong capabilities in edge detection, enabling clear detection of nuclei boundaries within focus regions. This makes it a potential tool to generate precise approximate masks, which can enhance supervised learning approaches with lower annotation costs. §.§ SAM-finetuning SAM has been trained on a large dataset for generic segmentation tasks, giving it the ability to perform well in general segmentation. However, when applied to specific tasks, SAM may exhibit suboptimal performance or even fail. Nonetheless, if the knowledge accumulated by SAM can be transferred to these specific tasks, it holds great potential for achieving better performance compared to training the model from scratch using only downstream task data, especially when the available data for the downstream task is limited. To optimize the transfer of knowledge, rather than finetuning the entire large pretrained model, a more effective and efficient approach is to selectively unfreeze only the last few layers. 
However, in our experiments, this approach still yielded inferior results compared to some baselines. Recently, there has been growing attention in the natural language processing community toward the use of adapters as an effective tool for finetuning models for different downstream tasks by leveraging task-specific knowledge <cit.>. In line with this, Chen et al. <cit.> successfully adapted prompt adapters <cit.> in the finetuning process of SAM. Specifically, they automatically extracted and encoded the texture information of each image as handcrafted features, which were then added to multiple layers in the encoder. Additionally, the proposed prompt encoder, along with the unfrozen lightweight decoder, became learnable during the finetuning process. Following their work, we implement such finetuning strategy in the nucleus segmentation task, but we explore more about its performance in different numbers of training data scenarios. § DATA AND EXPERIMENTS §.§ Data and Task For evaluating the model performance, we employ the MICCAI 2018 Monuseg dataset <cit.>. It consists of 30 training images and 14 testing images, all with dimensions of 1000×1000 pixels. Each image is accompanied by corresponding masks of nuclei. To ensure a fair and comparable evaluation, we use the same data split as a recent study <cit.>. The 30 training images are divided into two subsets, with 24 images assigned to the training set and the remaining 6 images forming the validation set. To evaluate the model performance of nucleus segmentation, Dice, AUC, Recall, Precision, best F1 (maximized F1 score at the optimal threshold), IoU (Intersection over Union) and ADJ (Adjusted Rand Index) are calculated. §.§ Experiment Setting In this work, we designed 3 sets of experiments to explore the performance of finetuned SAM on the nucleus segmentation task. 1) Finetuned by complete annotation or weak annotation. For complete annotation, the pixel-wise complete masks were provided for training data to finetune the pretrained SAM model. As for the weak annotation, only the bounding boxes of nuclei were provided. In this work, the bounding boxes were automatically prepared by using the complete masks. And then, these bounding boxes were used as the prompts in the pretrained SAM to generate pixel-level pseudo labels for finetuning. 2) Finetuned by different numbers of annotated data. To evaluate the performance of the proposed pipeline finetuned with different numbers of data, we adjusted the number of annotated images and the area of annotated regions. The complete training set contains 24 1000×1000 image patches with corresponding annotations. In the 4% training data set, only a 200×200 random patch was selected from each large patch for annotation. To keep the parameters of the model unchanged for a fair comparison, the rest area without annotation was set to intensity 0. In the extreme cases, only 3 patches (1 from each image) in the size of 200×200, taking up 0.5% of the original complete dataset, were randomly selected. 3) Comparison with other SOTA. In this study, we conducted a performance comparison between our proposed pipeline and other state-of-the-art (SOTA) methods. LViT <cit.> is a recently proposed model integrating language information and images for annotation and achieved SOTA performance in Monuseg dataset. We followed their data splits in our experiments. BEDs <cit.> integrated the self-ensemble and testing stage stain augmentation mechanism in UNet models for nuclei segmentation. 
Although BEDs used more data for training, the model was evaluated on the same testing set, so the performance results of LViT <cit.> and BEDs <cit.> reported here for comparison are taken from their respective papers. For the task of learning from a small annotated dataset, Xie et al. <cit.> proposed to use self-supervised learning to exploit the unlabeled data and achieved better results when only a small amount of annotated training data was available. Their method was evaluated on the same Monuseg training and testing split, so their results are reported here for comparison with ours. In addition, nnUNet, a popular benchmark in medical image segmentation, was run by us on Monuseg with the same dataset settings as our proposed pipeline. To ensure a fair comparison, we used the default settings of nnUNet and trained it for the default 1000 epochs. Regarding the other settings of our proposed pipeline, the ViT-H backbone was used in both the annotation generation and fine-tuning stages. Training was stopped early if the validation loss did not decrease for 40 consecutive epochs. Unless otherwise specified, the default parameters and settings of SAM-adapter <cit.> were kept. All experiments were repeated three times and the evaluation values were averaged. An RTX A6000 was used to run these experiments. § RESULTS AND DISCUSSION Table 1 compares our results with other SOTA methods. The best performance was observed when training with the whole training set and complete annotation. Notably, nnUNet <cit.>, our proposed method, and Xie's method <cit.> demonstrated similar performance in this scenario. However, when training with a reduced amount of annotated data, our proposed method exhibited a smaller drop in performance than the other methods and achieved superior results. Additionally, when employing weak labels for training, the proposed method consistently maintained the highest performance. Table 2 and Fig. <ref> provide a comprehensive view of the evaluation metrics and show the performance under the extreme case where only 0.5% of the training set is available. Across evaluation metrics such as Dice, AUC, Precision, bestF1, IoU, and ADJ, the proposed method consistently outperformed nnUNet in the different settings, and this was particularly evident when utilizing limited and weakly annotated data. However, when training with fewer and weakly annotated data, the proposed method exhibited a lower Recall value than nnUNet; this reflects nnUNet's more aggressive segmentation of nuclei, which came at the cost of significantly lower Precision and lower values on the other metrics. § CONCLUSION In summary, we introduce an efficient and effective pipeline that leverages the pretrained Segment Anything Model (SAM) for nuclei segmentation with limited annotation. The experiments show the capability of the pretrained SAM to generate pseudo labels from weak annotations; finetuning with these pseudo labels then removes the need for prompts during the inference phase. This approach achieves competitive performance compared to state-of-the-art (SOTA) methods while significantly reducing the burden of manual annotation. The pipeline holds great significance for real-world applications of nuclei segmentation, as it offers a practical solution that minimizes annotation efforts without compromising segmentation accuracy.
http://arxiv.org/abs/2307.00783v1
20230703070142
Monte Carlo Policy Gradient Method for Binary Optimization
[ "Cheng Chen", "Ruitao Chen", "Tianyou Li", "Ruichen Ao", "Zaiwen Wen" ]
math.OC
[ "math.OC", "cs.AI", "cs.LG", "90C09, 90C27, 90C59, 60J45, 60J20" ]
http://arxiv.org/abs/2307.00445v1
20230702001521
Reconstruction of Stochastic Dynamics from Large Datasets
[ "William Davis" ]
physics.data-an
[ "physics.data-an", "cond-mat.stat-mech" ]
APS/123-QED [email protected] Cecil H. and Ida M. Green Institute of Geophysics and Planetary Physics, Scripps Institution of Oceanography, University of California, San Diego, La Jolla, CA 92037 The complex dynamics of physical systems can often be modeled with stochastic differential equations. However, computational constraints inhibit the estimation of dynamics from large time-series datasets. I present methods for estimating drift and diffusion functions from inordinately large datasets through the use of incremental, online, updating statistics. I demonstrate the validity and utility of these methods by analyzing three large, varied synthetic datasets, as well as an empirical turbulence dataset. These methods are amenable to integration into existing stochastic estimation software packages, and hopefully will facilitate applications in “big data” problems. Reconstruction of Stochastic Dynamics from Large Datasets William Davis August 1, 2023 ========================================================= § INTRODUCTION The dynamics of complex systems with many degrees of freedom can often be modeled as continuous-time stochastic processes <cit.>. When a system is modeled by a stochastically-forced, scalar, first-order differential equation, the temporal evolution of a quantity X(t) is described by a Langevin-type equation <cit.> d/dtX(t) = f(X) + g(X)Γ(t), where a separation of scales partitions the dynamics of X(t) into slow changes modulated by f(X), and rapidly-varying changes modulated by g(X). Fluctuations are driven by Gaussian white noise Γ(t), with ⟨Γ(t)⟩=0 and ⟨Γ(t)Γ(t^')⟩=δ(t-t^'). Here and throughout, the Itô interpretation is adopted. If X(t) contains no discontinuous jumps, then the evolution of the probability density function can be described by the Fokker-Planck equation <cit.> ∂/∂ t p(x,t|x^',t^') = [-∂/∂ x D^(1)(x) + ∂^2/∂ x^2 D^(2)(x) ] p(x,t|x^',t^') where p(∘|∘) is the transition probability, and x and x^' are state variables of X. The Fokker-Planck equation contains the Kramers-Moyal (KM) coefficients D^(k)(x) = lim_τ→01/n!τ∫_-∞^∞[x^' - x]^k p(x^',t+τ|x,t) dx^'. The k=1 and k=2 KM coefficients are called the drift and diffusion functions, respectively, and they correspond to terms in the dynamical equation (<ref>), with f(x)=D^(1)(x) and g(x)=√(2D^(2)(x)). It has been shown that KM coefficients—and hence drift and diffusion functions—can be estimated from empirical samples of X(t), using a conditional averaging technique called “direct estimation” <cit.>. Direct estimation and descendant methods <cit.> have been applied to time-series data in various fields of science <cit.>, including turbulence <cit.>, wind energy <cit.>, climate data <cit.>, and geomagnetic field variations <cit.>. Although the calculation of KM coefficients is conceptually simple <cit.>, estimations are prone to bias, especially in areas of rarely-sampled state space <cit.>. Inaccuracies are particularly apparent for processes with heavy tails, or for systems that exhibit rare, transient dynamics. Attempts to resolve KM coefficients in rarely-sampled regions by reducing the resolution of conditioning also result in biased drift and diffusion estimates <cit.>. A rudimentary but effective solution to the sampling problem is to perform analyses on datasets that are as large as possible. This approach is effective because the estimation bias of KM coefficients scales as 1/√(NΔ t), where N is the number of samples and Δ t is the sampling interval <cit.>. 
Indeed in the era of “big data,” there is growing interest in estimating drift and diffusion functions for increasingly large scientific datasets <cit.>. However, existing KM procedures calculate KM coefficients using offline methods <cit.>, requiring the complete dataset to be available at once, and with memory requirements that scale with the number of data points. Large datasets are often incompatible with offline methods, either because the data cannot fit into computer memory, or because the data originates from arbitrarily large data streams <cit.>. An alternative approach is to use online methods, which incrementally update statistical estimates from streamed data, arriving one data point at a time <cit.>. In this paper I present online methods of computing KM coefficients from streamed time-series data, enabling the estimation of drift and diffusion functions from large time-series datasets which are unreachable with previous methods. § ESTIMATION OF CONDITIONAL MOMENTS Consider a finite sample of N points in X(t) from process (<ref>), denoted as 𝒮_N := { (t_1,X_1), (t_2,X_2), …, (t_N,X_N)}. Here I assume a regular sampling interval Δ t. The aim is to use these data to construct non-parametric estimates of drift and diffusion coefficients of the Langevin-type equation that generated X(t). Estimation of drift and diffusion coefficients is conducted at a set of N_x evaluation points in x, represented by the vector 𝒳 := [x_1, x_2, …, x_N_x]. Drift and diffusion estimates at these points will be denoted by the vector 𝐃̂^(k), with D̂^(k)_j := D̂^(k)(𝒳_j). Estimation of 𝐃̂^(k) requires evaluation of the conditional process increments—or “conditional moments” <cit.>—in (<ref>), namely M^(k)(τ,x) = ∫_-∞^∞ [x^' - x]^k p(x^', t + τ| x,t) dx^', for k=1,2. As the τ→ 0 limit in (<ref>) cannot be performed for empirical data, (<ref>) is estimated at a set of N_τ evaluation points in τ values, represented by the vector 𝒯 := [Δ t, 2Δ t, …, N_τΔ t]^T. Estimates of conditional moments (<ref>) are performed at all points in 𝒯 and 𝒳, and will be denoted as N_τ× N_x matrices 𝐌̂^(k), with M̂^(k)_ij := M̂^(k)(𝒯_i,𝒳_j). I now outline an existing estimation procedure for conditional moments, before proposing online updating formulae. §.§ Offline calculation One method of estimating conditional moments is Kernel-Based Regression (KBR) <cit.>. A chosen kernel function K(·) applies conditioning on the state variable, x, and, assuming ergodicity, the estimators for (<ref>) can be written as M̂^(k)_ij = ∑_n=1^N-i K_h(𝒳_j - X_n) [X_n+i - X_n]^k/∑_n=1^N-i K_h(𝒳_j - X_n), for k=1,2, where K_h(·) = K(·/h)/h is a scaling of the kernel with bandwidth h. Here I use the Epanechnikov kernel <cit.> K(x) = 3/4(1 - x^2) if x^2<1, 0 otherwise, which has computationally favorable properties <cit.>. If kernel conditioning is replaced with bin counting, the estimation becomes Histogram-Based Regression (HBR) <cit.>. Some studies also analyze the variance of the conditional process increments <cit.>. I will refer to this quantity as the “conditional variance,” and denote it as M̂^(2^*)_ij = ∑_n=1^N-i K_h(𝒳_j - X_n) ([X_n+i - X_n] - M̂^(1)_ij)^2/∑_n=1^N-i K_h(𝒳_j - X_n). Both HBR and KBR are implemented in modern software libraries <cit.>, and can be extended to irregularly-sampled time-series data <cit.>. However, these offline methods require the entire input 𝒮_N to be available at once: the entire calculation must be repeated if more data is appended to 𝒮_N. 
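As a concrete illustration of the offline estimators just described, the following sketch evaluates the sample conditional moments and the conditional variance on a grid of state-space points with the scaled Epanechnikov kernel. It is a minimal reference implementation written for this text; the function and variable names are illustrative and are not taken from any of the cited software packages.

import numpy as np

def epanechnikov(u):
    """K(u) = 3/4 (1 - u^2) for u^2 < 1, and 0 otherwise."""
    return np.where(u**2 < 1.0, 0.75 * (1.0 - u**2), 0.0)

def kbr_conditional_moments(X, x_eval, tau_steps, h):
    """Offline (batch) kernel-based estimates of M^(1), M^(2), and the
    conditional variance M^(2*) on the grid x_eval, for each integer step
    offset i in tau_steps (so that tau_i = i * dt)."""
    X = np.asarray(X, dtype=float)
    N = len(X)
    M1 = np.zeros((len(tau_steps), len(x_eval)))
    M2 = np.zeros_like(M1)
    M2star = np.zeros_like(M1)
    for a, i in enumerate(tau_steps):
        incr = X[i:] - X[:N - i]                        # X_{n+i} - X_n, n = 1..N-i
        for j, xj in enumerate(x_eval):
            w = epanechnikov((xj - X[:N - i]) / h) / h  # K_h(x_j - X_n)
            W = w.sum()
            if W == 0.0:                                # no samples near x_j
                M1[a, j] = M2[a, j] = M2star[a, j] = np.nan
                continue
            M1[a, j] = np.sum(w * incr) / W
            M2[a, j] = np.sum(w * incr**2) / W
            M2star[a, j] = np.sum(w * (incr - M1[a, j])**2) / W
    return M1, M2, M2star

Direct estimation of drift and diffusion then amounts to dividing these moments by tau (and by k! for k = 2), as in the numerical examples below; the need to keep the full array X in memory is exactly what the online formulation of the next subsection removes.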
§.§ Online calculation I now present formulae for updating sample conditional moments (<ref>), previously calculated from 𝒮_N-1, with a single new observation (t_N, X_N). I refer to this approach as “Online Kernel-Based Regression” (OKBR). To facilitate indexing, the subscript notation […]|_n denotes a quantity calculated from the first n observations. The updating formulae are written as (see Appendix <ref>) M̂^(k)_ij|_N = M̂^(k)_ij|_N-1 + K_h(𝒳_j - X_N-i) ×([X_N - X_N-i]^k - M̂^(k)_ij|_N-1)/W_ij|_N, for k=1,2, where W_ij|_N are cumulative weights W_ij|_N = W_ij|_N-1 + K_h(𝒳_j - X_N-i). To define a corresponding updating formula for (<ref>), I introduce the intermediate quantity S_ij|_N, which corresponds to the weighted sum of squares of differences from the current mean, S_ij|_N := ∑_n=1^N-i K_h(𝒳_j - X_n) ([X_n+i - X_n] - M̂^(1)_ij|_N)^2, and is related to (<ref>) by M̂^(2^*)_ij|_N = S_ij|_N/W_ij|_N. The corresponding online formula is (see Appendix <ref>) S_ij|_N = S_ij|_N-1 + K_h(𝒳_j - X_N-i) ×([X_N - X_N-i] - M̂^(1)_ij|_N-1) ×([X_N - X_N-i] - M̂^(1)_ij|_N), These formulae have been constructed to avoid numerical instability and loss of precision <cit.>. In the next section, I validate the presented methods on three synthetic datasets. § NUMERICAL EXAMPLES §.§ Ornstein-Uhlenbeck process I examine a simple example where the drift and diffusion functions are set as D^(1)(x) = -x, D^(2)(x) = 1. I numerically integrate <cit.> this process using a sampling interval of Δ t=10^-3 for N=10^7 data-points. I estimate conditional moments at 26 equally-spaced points in the range [-5,5] using a bandwidth of h=0.4, and perform time sampling at a single time-step 𝒯=[Δ t]. I conduct estimation using both the KBR formulae (<ref>) and OKBR formulae (<ref>). To illustrate the ability of OKBR to conduct analysis on an inordinately large dataset, I also repeat the OKBR estimation for a simulated time-series with N=10^10 data-points. For all three cases, I estimate drift and diffusion coefficients from the conditional moments using direct estimation <cit.> 𝐃̂^(k) = 1/k!Δ t𝐌̂^(k). Results are shown in Fig. <ref>. I find that for the N=10^7 case, KBR and OKBR give identical estimates for the drift and diffusion coefficients, and the coefficients |x|≲ 2 are estimated fairly. However at the rarely sampled edges, either large errors are present or there are no samples available to make an estimate. For the N=10^10 case, OKBR accurately recovers the drift and diffusion coefficients over the entire estimation range. It is not possible to use KBR on the N=10^10 dataset, as the data does not fit within computer memory. §.§.§ Empirical performance of estimation procedures To empirically benchmark the time and space requirements for KBR and OKBR, I repeat the estimations in Section <ref>, varying the number of data-points N and leaving all other parameters unchanged. The number of data-points considered is N∈(10^4, 10^5, …, 10^10), however estimation using the largest dataset using KBR is not possible due to memory requirements. Table <ref> shows the benchmark results. Both methods show linear scaling in time. KBR shows linear scaling in space, whereas the space requirements of OKBR scale to a constant, independent of N. §.§ Tri-stable system I consider a system which exhibits poorly-sampled regions of state space, arising from fast, transient dynamics through unstable (or metastable) states. 
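Before working through this example, it may help to see the online update rules of the preceding subsection written out as executable code. The sketch below is a minimal streaming estimator that stores only the running statistics and the most recent max(tau)/Δt observations; the class and method names are illustrative and do not correspond to the interface of any existing package.

import numpy as np
from collections import deque

class OKBR:
    """Online kernel-based regression: incremental conditional moments."""
    def __init__(self, x_eval, tau_steps, h):
        self.x_eval = np.asarray(x_eval, dtype=float)
        self.taus = list(tau_steps)                 # integer step offsets i
        self.h = h
        shape = (len(self.taus), len(self.x_eval))
        self.W = np.zeros(shape)                    # cumulative kernel weights
        self.M1 = np.zeros(shape)                   # running conditional mean
        self.M2 = np.zeros(shape)                   # running second moment
        self.S = np.zeros(shape)                    # weighted sum of squared deviations
        self.buffer = deque(maxlen=max(self.taus))  # only the last max(i) samples

    def _kernel(self, u):
        """Scaled Epanechnikov kernel K_h(u) = K(u/h)/h."""
        v = u / self.h
        return np.where(np.abs(v) < 1.0, 0.75 * (1.0 - v**2), 0.0) / self.h

    def update(self, x_new):
        """Incorporate one new observation X_N and then discard old data."""
        for a, i in enumerate(self.taus):
            if len(self.buffer) < i:
                continue                            # X_{N-i} not yet available
            x_old = self.buffer[-i]                 # X_{N-i}
            incr = x_new - x_old                    # X_N - X_{N-i}
            k = self._kernel(self.x_eval - x_old)
            self.W[a] += k
            nz = self.W[a] > 0.0
            m1_prev = self.M1[a].copy()
            self.M1[a, nz] += k[nz] * (incr - m1_prev[nz]) / self.W[a, nz]
            self.M2[a, nz] += k[nz] * (incr**2 - self.M2[a, nz]) / self.W[a, nz]
            self.S[a, nz] += k[nz] * (incr - m1_prev[nz]) * (incr - self.M1[a, nz])
        self.buffer.append(x_new)

    def conditional_variance(self):
        """Return S / W where weights are non-zero, NaN elsewhere."""
        return np.divide(self.S, self.W, out=np.full_like(self.S, np.nan),
                         where=self.W > 0.0)

Feeding the estimator one sample at a time reproduces the batch KBR values while keeping the memory footprint independent of N, which is the property exploited in the benchmarks above.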
One natural example of such a system is the time-variability of the axial dipole moment of Earth's geomagnetic field, which shows two prominently stable states at positive and negative polarity, and an unstable (or possibly metastable) “weak” state during polarity transitions <cit.>. The qualitative dynamics of this system can be represented by the toy model D^(1)(x) = - x + 27x^3 -24x^5, D^(2)(x) = 7/10. This system is characterized by two strong attractors at x≈± 1, and one weaker, rarely sampled attractor at x=0. Here, one might aim to determine the stability of the middle state from empirical data. I integrate system (<ref>) with a sampling interval of Δ t=10^-4, using N=5×10^7 and N=10^10 points for KBR (<ref>) and OKBR (<ref>), respectively. I estimate conditional moments at 45 equally-spaced points in the interval [-1.4,1.4] using a bandwidth of h=0.03, and perform sampling in τ at a series of time-steps 𝒯=[Δ t, 2Δ t, 3Δ t, 4Δ t]^T. I estimate drift and diffusion coefficients in the τ→ 0 limit in (<ref>) by minimizing V(𝐃̂^(k)) = ||𝐌̂^(k) - 𝒯𝐃̂^(k)||^2 using ordinary least squares, i.e., 𝐃̂^(k) = (𝒯^T𝒯)^-1𝒯^T 𝐌̂^(k). Results are shown in Fig. <ref>. I find that for the N=5×10^7 case, KBR is able to reasonably recover the drift and diffusion coefficients close to the attractors at x≈± 1. However, poor estimates are made for the rarely-sampled transitions, for x ∈ [-0.5,0.5], and the details of stability at x=0 are unresolvable. For the N=10^10 case, OKBR accurately recovers the drift and diffusion coefficients across the entire sampling domain, revealing the presence of the weak attractor at x=0. §.§ Multiplicative and correlated noise I consider a system with a multiplicative diffusion term and an exponentially-correlated noise source η(t), d/dtX = D^(1)(X) + √(2D^(2)(X))η(t), d/dtη = -1/θη + 1/θΓ(t), where D^(1)(x) = - 1/8 - 9/4 x - 4/15 x^3, D^(2)(x) = 1 + 1/50x^2 + 1/40x^4, and θ=0.01 is the correlation time of the noise η(t), and Γ(t) is internal Gaussian white noise. Only the time-series of X(t) is observed. I analyze process (<ref>–<ref>) using the non-parametric inversion method of <cit.>, assuming that the timescale θ has already been estimated <cit.>. This method requires estimation of the sample conditional mean—k=1 in (<ref>) and (<ref>)—as well as the conditional variance, (<ref>) and (<ref>). I integrate the process with a sampling interval of Δ t=5×10^-3, using N=10^7 and N=5×10^9 points for KBR and OKBR, respectively. I estimate the conditional moments 𝐌̂^(k) at 100 equally-spaced points in the interval [-2.5,2.5] using a bandwidth of h=0.01, and perform sampling in τ using 25 time-steps, 𝒯=[Δ t, …, 25Δ t]^T. To estimate the drift and diffusion coefficients using the method of <cit.>, I decompose the sample conditional mean and variance into basis functions r_i(τ, θ) and coefficients λ_i^(k)(x), given by M^(k)(x, τ) ≈∑_i=1^3 λ_i^(k)(x) r_i(τ, θ). Here the basis functions are r_1(τ; θ) = τ - θ(1-e^-τ/θ), r_2(τ; θ) = τ^2/2 - θ r_1(τ; θ), r_3(τ; θ) = τ^3/6 - θ r_2(τ; θ), and are expressed in matrix form with elements R_ij:=r_j(𝒯_i). I solve for the coefficients by minimizing V(λ^(k)) = ||𝐌̂^(k) - Rλ^(k)||^2 using ordinary least squares, i.e., λ^(k) = (R^TR)^-1R^T 𝐌̂^(k). Finally, I use the i=1 components of the coefficients to solve differential algebraic equations for estimates of the drift and diffusion coefficients 𝐃̂^(k); see <cit.> for details. The estimated drift and diffusion coefficients are shown in Fig. <ref>.
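For concreteness, the τ-regressions used in both of the preceding examples reduce to the same ordinary least-squares step, whether the design matrix contains the sampled τ values or the basis functions r_i(τ; θ). The sketch below is illustrative only; the function name is mine, and any k! normalization implied by the direct-estimation formula is left to the caller.

import numpy as np

def ols_through_origin(design, M_hat):
    """Solve min_C || M_hat - design @ C ||^2 column by column.

    design : (n_tau, p) array, e.g. taus.reshape(-1, 1) for a plain linear
             fit in tau, or the (n_tau, 3) matrix R of basis functions.
    M_hat  : (n_tau, n_x) array of estimated conditional moments.
    Returns the (p, n_x) array of fitted coefficients."""
    coeffs, *_ = np.linalg.lstsq(np.asarray(design, dtype=float),
                                 np.asarray(M_hat, dtype=float), rcond=None)
    return coeffs

# Example for the linear-in-tau case (arrays here are hypothetical):
#   taus = dt * np.arange(1, 5)
#   D1_hat = ols_through_origin(taus.reshape(-1, 1), M1_hat)[0]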
I find that for the N=10^7 case, KBR is able to recover the drift and diffusion coefficients in the range x ∈ [-1,1], but poor estimates are made in the rarely sampled tails. For the N=5×10^9 case, OKBR is able to accurately recover the drift and diffusion coefficients over a much larger range. To illustrate the consequences of poorly-resolved tails, I use 𝐃̂^(k) to estimate the parametric coefficients of the diffusion function, D^(2)(x) = A + Bx^2 + Cx^4. Parameter estimates in Table <ref> show that both the quadratic and quartic coefficients are poorly resolved for the KBR case, with uncertainty intervals overlapping zero. However, the increased resolution that OKBR enables results in accurate parameter estimation. § APPLICATION TO TURBULENCE DATA To illustrate one possible application of OKBR, I examine a turbulence dataset from <cit.>. This dataset—originally published by <cit.>—comes from a turbulent air jet experiment, where time-variable observations of local air velocity were made using hot-wire measurements. The dataset comprises N=1.25× 10^7 points sampled at 8 kHz, although other turbulence datasets can be orders of magnitude larger <cit.>. The data can be used to investigate a statistical description of a turbulent cascade <cit.>. The measurements, under the assumption of Taylor's hypothesis of frozen turbulence, reflect spatial velocity variations u(x). Increments of these velocity variations ξ_n,i := ξ(x_n,r_i) = u(x_n) - u(x_n - r_i), define a “zooming-in” process in ξ for decreasing r. Following the phenomenological model of <cit.>, velocity increments evolve as a Markov process in scale r. From this, the turbulent cascade is interpreted as a stochastic process described by a Fokker-Planck equation evolving through a sequence of velocity increments ξ_n,0,ξ_n,1,ξ_n,2,… at increasingly smaller scales r_0>r_1>r_2>…. One can use the empirical velocity measurements to not only verify the Markov property of ξ(r), but also to estimate the corresponding drift and diffusion coefficients <cit.>. The conditional moments for two increment scales separated by δ are defined as M^(k)(δ,ξ,r,u_N) = ∫_-∞^∞ [ξ^'(r-δ,u_N) - ξ(r,u_N)]^k p(ξ^'| ξ,u_N) dξ^', for k=1,2. Then, the KM coefficients are given by <cit.> D^(k)(ξ,r,u_N) = r/k!lim_δ→ 01/δM^(k)(δ,ξ,r,u_N). Analogously to (<ref>), the online formulae for the estimator of (<ref>) can be written as M̂^(k)_ij|_N = M̂^(k)_ij|_N-1 + K_h(𝒳_j - ξ_N,0) ×([ξ_N,i - ξ_N,0]^k - M̂^(k)_ij|_N-1)/W_ij|_N, where W_ij|_N = W_ij|_N-1 + K_h(𝒳_j - ξ_N,0). I analyze the turbulence dataset comparably to <cit.> by normalizing the velocity by its variance, σ, and estimating conditional moments using the same parameters described in <cit.>, their Fig. 23. I use OKBR with a boxcar kernel and a bandwidth of h=0.038 to estimate conditional moments at a range of scale separations δ, with Δ_EM<δ<2Δ_EM, where Δ_EM is the Einstein-Markov length. KM coefficients are estimated in the δ→0 limit through linear extrapolation. The estimated drift and diffusion coefficients are shown in Fig. <ref>, exactly reproducing the previously-determined results of <cit.>. § DISCUSSION AND CONCLUSION I present online updating formulae for estimating conditional moments and variance from time-series data. These formulae enable the non-parametric estimation of drift and diffusion functions from arbitrarily large datasets, without requiring the entire set of input data to be available at once. I demonstrate this with three numerical examples.
Even for datasets that far exceed the working memory of most computers, OKBR is able to generate accurate estimates of drift and diffusion functions, indicating utility in the analysis of exceedingly large scientific datasets. This method could thus be incorporated into existing software packages  <cit.>. Additionally, OKBR is applied to a turbulence dataset. The estimated drift and diffusion functions reproduce previously-determined results, indicating that OKBR may be a valuable method for streamed instrument data. The methods presented here are demonstrated in one dimension; however, extensions to higher dimensions are straightforward. Extensions cannot be assumed for higher-order conditional moments (k>2 in M̂^(k)_ij), as updating formulae for skewness, kurtosis, and other moments are non-trivial <cit.>. Further work should seek to extend the online framework to higher-order conditional moments. Although OKBR reduces the memory complexity to calculate conditional moments from 𝒪(N) to 𝒪(1), the time complexity remains at 𝒪(N). However, as detailed by <cit.>, online formulae can sometimes be altered for calculation by multiple processing units in parallel. It may thus be possible to estimate conditional moments in sub-linear time <cit.>. I thank Matthias Morzfeld, Catherine Constable, and Katherine Armstrong for helpful discussions which benefited this research. The code for implementing the estimation procedures are available at https://doi.org/10.5281/zenodo.8104832DOI:10.5281/zenodo.8104832. The dataset from <cit.> in Section <ref> is used under the GNU General Public License (GPL) version 3. This work is supported by the Green Foundation's John W. Miles postdoctoral fellowship in theoretical and computational geophysics. § DERIVATION OF INCREMENTAL QUANTITIES §.§ Weights and conditional moments First I define the cumulative weights, W_ij|_N := ∑_n=1^N-i K_h(𝒳_j - X_n). This is rearranged to permit incremental updates W_ij|_N = W_ij|_N-1 + K_h(𝒳_j - X_N-i). Next I derive incremental formulae for conditional moments (<ref>). Identifying the denominator of (<ref>) as (<ref>) and rearranging gives M^(k)_ij|_N· W_ij|_N = ∑_n=1^N-i K_h(𝒳_j - X_n)[X_n+i - X_n]^k. Separating the last term in the sum and substituting (<ref>) gives M̂^(k)_ij|_N· W_ij|_N = M̂^(k)_ij|_N-1·(W_ij|_N - K_h(𝒳_j - X_N-i)) + K_h(𝒳_j - X_N-i)[X_N - X_N-i]^k. Finally, dividing by W^[N]_ij and rearranging gives M̂^(k)_ij|_N = M̂^(k)_ij|_N-1 + K_h(𝒳_j - X_N-i) ×([X_N - X_N-i]^k - M̂^(k)_ij|_N-1)/W_ij|_N, as required by (<ref>). §.§ Conditional variance An online calculation of the conditional variance (<ref>) is achieved through incremental updating of the quantity S_ij|_N, the weighted sum of squares of differences from the current mean S_ij|_N := ∑_n=1^N-i K_h(𝒳_j - X_n) ([X_n+i - X_n] - M̂^(1)_ij|_N)^2. 
Derivation of an incremental formula for this expression uses equations (<ref>) and (<ref>), and follows in a similar fashion to Subsection <ref>: S_ij|_N = [∑_n=1^N-i K_h(𝒳_j - X_n)[X_n+i - X_n]^2] - (M̂^(1)_ij|_N)^2 · W_ij|_N = S_ij|_N-1 + K_h(𝒳_j - X_N-i)[X_N - X_N-i]^2 + (M̂^(1)_ij|_N-1)^2 ·(W_ij|_N - K_h(𝒳_j - X_N-i)) - (M̂^(1)_ij|_N)^2 · W_ij|_N = S_ij|_N-1 + K_h(𝒳_j - X_N-i){[X_N - X_N-i]^2 - (M̂^(1)_ij|_N-1)^2} - W_ij|_N-1·(M̂^(1)_ij|_N - M̂^(1)_ij|_N-1)(M̂^(1)_ij|_N + M̂^(1)_ij|_N-1) = S_ij|_N-1 + K_h(𝒳_j - X_N-i){[X_N - X_N-i]^2 - (M̂^(1)_ij|_N-1)^2 - ([X_N - X_N-i] - M̂^(1)_ij|_N-1) (M̂^(1)_ij|_N + M̂^(1)_ij|_N-1) } = S_ij|_N-1 + K_h(𝒳_j - X_N-i) ((X_N - X_N-i) - M̂^(1)_ij|_N-1) ((X_N - X_N-i) - M̂^(1)_ij|_N). [Haken(2004)]haken2004synergetics author author H. Haken, @noop title Synergetics: Introduction and advanced topics (publisher Springer-Verlag, year 2004)NoStop [Risken(1996)]risken1996fokker author author H. Risken, in @noop booktitle The Fokker-Planck Equation (publisher Springer, year 1996) pp. pages 63–95NoStop [Siegert et al.(1998)Siegert, Friedrich, and Peinke]siegert1998analysis author author S. Siegert, author R. Friedrich, and author J. Peinke, @noop journal journal Physics Letters A volume 243, pages 275 (year 1998)NoStop [Gottschall and Peinke(2008)]gottschall2008definition author author J. Gottschall and author J. Peinke, @noop journal journal New Journal of Physics volume 10, pages 083034 (year 2008)NoStop [Böttcher et al.(2006)Böttcher, Peinke, Kleinhans, Friedrich, Lind, and Haase]bottcher2006reconstruction author author F. Böttcher, author J. Peinke, author D. Kleinhans, author R. Friedrich, author P. G. Lind, and author M. Haase, @noop journal journal Physical Review Letters volume 97, pages 090603 (year 2006)NoStop [Lind et al.(2010)Lind, Haase, Böttcher, Peinke, Kleinhans, and Friedrich]lind2010extracting author author P. G. Lind, author M. Haase, author F. Böttcher, author J. Peinke, author D. Kleinhans, and author R. Friedrich, @noop journal journal Physical Review E volume 81, pages 041125 (year 2010)NoStop [Scholz et al.(2017)Scholz, Raischel, Lopes, Lehle, Wächter, Peinke, and Lind]scholz2017parameter author author T. Scholz, author F. Raischel, author V. V. Lopes, author B. Lehle, author M. Wächter, author J. Peinke, and author P. G. Lind, @noop journal journal Physics Letters A volume 381, pages 194 (year 2017)NoStop [Lade(2009)]lade2009finite author author S. J. Lade, @noop journal journal Physics Letters A volume 373, pages 3705 (year 2009)NoStop [Honisch and Friedrich(2011)]honisch2011estimation author author C. Honisch and author R. Friedrich, @noop journal journal Physical Review E volume 83, pages 066701 (year 2011)NoStop [Rydin Gorjão et al.(2021)Rydin Gorjão, Witthaut, Lehnertz, and Lind]rydin2021arbitrary author author L. Rydin Gorjão, author D. Witthaut, author K. Lehnertz, and author P. G. Lind, @noop journal journal Entropy volume 23, pages 517 (year 2021)NoStop [Lehle and Peinke(2018)]lehle2018analyzing author author B. Lehle and author J. Peinke, @noop journal journal Physical Review E volume 97, pages 012113 (year 2018)NoStop [Friedrich et al.(2011)Friedrich, Peinke, Sahimi, and Tabar]friedrich2011approaching author author R. Friedrich, author J. Peinke, author M.
Sahimi, and author M. R. R. Tabar, @noop journal journal Physics Reports volume 506, pages 87 (year 2011)NoStop [Tabar(2019)]tabar2019analysis author author R. Tabar, @noop title Analysis and data-based reconstruction of complex nonlinear dynamical systems, Vol. volume 730 (publisher Springer, year 2019)NoStop [Friedrich and Peinke(1997)]friedrich1997description author author R. Friedrich and author J. Peinke, @noop journal journal Physical Review Letters volume 78, pages 863 (year 1997)NoStop [Renner et al.(2001)Renner, Peinke, and Friedrich]renner2001experimental author author C. Renner, author J. Peinke, and author R. Friedrich, @noop journal journal Journal of Fluid Mechanics volume 433, pages 383 (year 2001)NoStop [Friedrich and Grauer(2020)]friedrich2020generalized author author J. Friedrich and author R. Grauer, @noop journal journal Atmosphere volume 11, pages 1003 (year 2020)NoStop [Milan et al.(2013)Milan, Wächter, and Peinke]milan2013turbulent author author P. Milan, author M. Wächter, and author J. Peinke, @noop journal journal Physical review letters volume 110, pages 138701 (year 2013)NoStop [Lind et al.(2005)Lind, Mora, Gallas, and Haase]lind2005reducing author author P. G. Lind, author A. Mora, author J. A. C. Gallas, and author M. Haase, @noop journal journal Physical Review E volume 72, pages 056706 (year 2005)NoStop [Buffett et al.(2013)Buffett, Ziegler, and Constable]buffett2013stochastic author author B. A. Buffett, author L. Ziegler, and author C. G. Constable, @noop journal journal Geophysical Journal International volume 195, pages 86 (year 2013)NoStop [Davis and Buffett(2021)]davis2021inferring author author W. Davis and author B. Buffett, @noop journal journal Geophysical Journal International volume 228, pages 1478 (year 2021)NoStop [Lamouroux and Lehnertz(2009)]lamouroux2009kernel author author D. Lamouroux and author K. Lehnertz, @noop journal journal Physics Letters A volume 373, pages 3507 (year 2009)NoStop [Gorjão and Meirinhos(2019)]gorjao2019kramersmoyal author author L. R. Gorjão and author F. Meirinhos, @noop journal journal Journal of Open Source Software volume 4, pages 1693 (year 2019)NoStop [Kleinhans and Friedrich(2007)]kleinhans2007quantitative author author D. Kleinhans and author R. Friedrich, in @noop booktitle Wind Energy (publisher Springer, year 2007) pp. pages 129–133NoStop [Raischel et al.(2014)Raischel, Moreira, and Lind]raischel2014human author author F. Raischel, author A. Moreira, and author P. G. Lind, @noop journal journal The European Physical Journal Special Topics volume 223, pages 2107 (year 2014)NoStop [Rinn et al.(2016)Rinn, Lind, Wächter, and Peinke]rinn2016langevin author author P. Rinn, author P. Lind, author M. Wächter, and author J. Peinke, @noop journal journal Journal of Open Research Software volume 4 (year 2016)NoStop [Gorjão et al.(2023)Gorjão, Witthaut, and Lind]gorjao2023jumpdiff author author L. R. Gorjão, author D. Witthaut, and author P. G. Lind, @noop journal journal Journal of Statistical Software volume 105, pages 1 (year 2023)NoStop [Wang et al.(2016)Wang, Chen, Schifano, Wu, and Yan]wang2016statistical author author C. Wang, author M.-H. Chen, author E. Schifano, author J. Wu, and author J. Yan, @noop journal journal Statistics and its interface volume 9, pages 399 (year 2016)NoStop [Karp(1992)]karp1992line author author R. M. Karp, in @noop booktitle Algorithms, Software, Architecture: Information Processing 92: Proceedings of the IFIP 12th World Computer Congress, Vol. volume 1 (year 1992) p. 
pages 416NoStop [Day and Zhou(2020)]day2020onlinestats author author J. Day and author H. Zhou, @noop journal journal Journal of open source software volume 5 (year 2020)NoStop [Epanechnikov(1969)]epanechnikov1969non author author V. A. Epanechnikov, @noop journal journal Theory of Probability &; Its Applications volume 14, pages 153 (year 1969)NoStop [Härdle et al.(2004)Härdle, Müller, Sperlich, and Werwatz]hardle2004nonparametric author author W. Härdle, author M. Müller, author S. Sperlich, and author A. Werwatz, @noop title Nonparametric and semiparametric models, Vol. volume 1 (publisher Springer, year 2004)NoStop [Ragwitz and Kantz(2001)]ragwitz2001indispensable author author M. Ragwitz and author H. Kantz, @noop journal journal Physical Review Letters volume 87, pages 254501 (year 2001)NoStop [Siefert et al.(2003)Siefert, Kittel, Friedrich, and Peinke]siefert2003quantitative author author M. Siefert, author A. Kittel, author R. Friedrich, and author J. Peinke, @noop journal journal EPL (Europhysics Letters) volume 61, pages 466 (year 2003)NoStop [Fuchs et al.(2022)Fuchs, Kharche, Patil, Friedrich, Wächter, and Peinke]fuchs2022open author author A. Fuchs, author S. Kharche, author A. Patil, author J. Friedrich, author M. Wächter, and author J. Peinke, @noop journal journal Physics of Fluids volume 34, pages 101801 (year 2022)NoStop [Davis and Buffett(2022)]davis2022estimation author author W. Davis and author B. Buffett, @noop journal journal Physical Review E volume 106, pages 014140 (year 2022)NoStop [Welford(1962)]welford1962note author author B. Welford, @noop journal journal Technometrics volume 4, pages 419 (year 1962)NoStop [West(1979)]west1979updating author author D. West, @noop journal journal Communications of the ACM volume 22, pages 532 (year 1979)NoStop [Mil'shtejn(1975)]mil1975approximate author author G. Mil'shtejn, @noop journal journal Theory of Probability &; Its Applications volume 19, pages 557 (year 1975)NoStop [Constable and Parker(1988)]constable1988statistics author author C. Constable and author R. Parker, @noop journal journal Journal of Geophysical Research: Solid Earth volume 93, pages 11569 (year 1988)NoStop [Lhuillier et al.(2013)Lhuillier, Hulot, and Gallet]lhuillier2013statistical author author F. Lhuillier, author G. Hulot, and author Y. Gallet, @noop journal journal Physics of the Earth and Planetary Interiors volume 220, pages 19 (year 2013)NoStop [Wicht and Meduri(2016)]wicht2016gaussian author author J. Wicht and author D. G. Meduri, @noop journal journal Physics of the Earth and Planetary Interiors volume 259, pages 45 (year 2016)NoStop [Fuchs et al.(2017)Fuchs, Girard, Peinke, Diribarne, Moro, and Guelker]fuchs2017integral author author A. Fuchs, author A. Girard, author J. Peinke, author P. Diribarne, author J. Moro, and author G. Guelker, in @noop booktitle 16th European Turbulence Conference, 21-24 August, 2017, Stockholm, Sweden (year 2017)NoStop [Peinke et al.(2019)Peinke, Tabar, and Wächter]peinke2019fokker author author J. Peinke, author M. R. Tabar, and author M. Wächter, @noop journal journal Annual Review of Condensed Matter Physics volume 10, pages 107 (year 2019)NoStop [Pèbay(2008)]pebay2008formulas author author P. P. Pèbay, @noop title Formulas for robust, one-pass parallel computation of covariances and arbitrary-order statistical moments., type Tech. Rep. 
(institution Sandia National Laboratories (SNL), Albuquerque, NM, and Livermore, CA …, year 2008)NoStop [Chan et al.(1982)Chan, Golub, and LeVeque]chan1982updating author author T. F. Chan, author G. H. Golub, and author R. J. LeVeque, in @noop booktitle COMPSTAT 1982 5th Symposium held at Toulouse 1982 (organization Springer, year 1982) pp. pages 30–41NoStop [Schubert and Gertz(2018)]schubert2018numerically author author E. Schubert and author M. Gertz, in @noop booktitle Proceedings of the 30th International Conference on Scientific and Statistical Database Management (year 2018) pp. pages 1–12NoStop
http://arxiv.org/abs/2307.02314v1
20230705141626
Maximum edge colouring problem on graphs that exclude a fixed minor
[ "Zdeněk Dvořák", "Abhiruk Lahiri" ]
cs.DM
[ "cs.DM", "cs.DS", "math.CO", "05C85", "F.2.2" ]
Z. Dvořák and A. Lahiri Charles University, Prague 11800, Czech Republic [email protected] Heinrich Heine University, Düsseldorf 40225, Germany [email protected] Maximum edge colouring problem on graphs that exclude a fixed minor (supported by project 22-17398S (Flows and cycles in graphs on surfaces) of the Czech Science Foundation) Zdeněk Dvořák1 Abhiruk Lahiri2 Received March 10, 2023; accepted May 12, 2023 ====================================================================================================================================================================== The maximum edge colouring problem asks for an assignment of colours to the edges of a graph using the maximum number of colours, under the condition that every vertex has at most a fixed number of distinctly coloured edges incident on it. If that fixed number is q, we call the colouring a maximum edge q-colouring. The problem models a non-overlapping frequency channel assignment question on wireless networks. The problem has also been studied from a purely combinatorial perspective in the graph theory literature. We study the question when the input graph is sparse. We show the problem remains NP-hard on 1-apex graphs. We also show that there exists a polynomial-time approximation scheme (PTAS) for the problem on minor-free graphs. The PTAS is based on a recently developed Baker game technique for proper minor-closed classes, thus avoiding the need to use any involved structural results. This further pushes the Baker game technique beyond the problems expressible in the first-order logic. § INTRODUCTION For a graph G = (V, E), an edge q-colouring of G is a mapping f : E(G) →ℤ^+ such that the number of distinct colours incident on any vertex v ∈ V(G) is bounded by q, and the spread of f is the total number of distinct colours it uses. The maximum edge q-chromatic number q(G) of G is the maximum spread of an edge q-colouring of G. A more general notion has been studied in the combinatorics and graph theory communities in the context of extremal problems, called the anti-Ramsey number. For given graphs G and H, the anti-Ramsey number ar(G, H) denotes the maximum number of colours that can be assigned to edges of G so that there does not exist any subgraph isomorphic to H which is rainbow, i.e., all the edges of the subgraph receive distinct colours under the colouring. The maximum edge q-chromatic number of G is clearly equal to ar(G, K_1, q+1), where K_1, q+1 is a star with q+1 edges. The notion of anti-Ramsey number was introduced by Erdős and Simonovits in 1973 <cit.>. The initial studies focused on determining tight bounds for ar(G, H). A lot of research has been done on the case when G = K_n, the complete graph, and H is a specific type of graph (a path, a complete graph, …) <cit.>. For a comprehensive overview of known results in this area, we refer interested readers to <cit.>. Bounds on ar(K_n, H) where H is a star graph are reported in <cit.>. Gorgol and Lazuka computed the exact value of ar(K_n,H) when H is K_1,4 with an edge added to it <cit.>. For a general graph G, Montellano-Ballesteros studied ar(G, K_1,q) and reported an upper bound <cit.>. The algorithmic aspects of this problem started gaining attention from researchers around fifteen years ago, due to its application to wireless networks <cit.>. At that time there was great interest in increasing the capacity of wireless mesh networks (which are commonly called wireless broadband nowadays).
The solution that became the industry standard is to use multiple channels and transceivers with the ability to simultaneously communicate with many neighbours using multiple radios over the channels <cit.>. Wireless networks based on the IEEE 802.11a/b/g and 802.16 standards are examples of such systems. However, there is a physical bottleneck in deploying this solution. Enabling every wireless node to have multiple radios can create interference and thus reduce reliability. To circumvent that, there is a limit on the number of channels simultaneously used by any wireless node. In the IEEE 802.11 b/g standard and IEEE 802.11a standard, the numbers of permissible simultaneous channels are 3 and 12, respectively <cit.>. If we model a wireless network as a graph where each wireless node corresponds to a vertex of the graph, then the problem can be formulated as a maximum edge colouring problem. The nonoverlapping channels can be associated with distinct colours. The number of distinctly coloured edges allowed to be incident on each vertex captures the limit on the number of channels that can be used simultaneously at the corresponding wireless node. The question of how many channels can be used simultaneously by a given network translates into the number of colours that can be used in a maximum edge colouring. Devising an efficient algorithm for the maximum edge q-colouring problem is not an easy task. In <cit.>, the problem is reported NP-hard for every q ≥ 2. The authors further showed that the problem is hard to approximate within a factor of (1+ 1/q) for every q ≥ 2, assuming the unique games conjecture <cit.>. A simple 2-approximation algorithm for the maximum edge 2-colouring problem is reported in <cit.>. The same algorithm from <cit.> has an approximation ratio of 5/3 with the additional assumption that the graph has a perfect matching <cit.>. It is also known that the approximation ratio can be improved to 8/5 if the input graph is assumed to be triangle-free <cit.>. An almost tight analysis of the algorithm is known for the maximum edge q-colouring problem (q ≥ 3) when the input graph satisfies certain degree constraints <cit.>. The q=2 case is also known to be fixed-parameter tractable <cit.>. In spite of several negative theoretical results, the wireless network question continued drawing the attention of researchers due to its relevance in applications. There are several studies focusing on improving approximation under further assumptions on constraints that are meaningful in practical applications <cit.>, <cit.>, <cit.>, <cit.>. This motivates us to study the more general question on a graph class that captures the essence of wireless mesh networks. Typically, disk graphs and unit disk graphs are well-accepted abstract models for wireless networks. However, they can capture networks far more complex than real-life networks tend to be <cit.>. By definition, both unit disk graphs and disk graphs can have arbitrary size cliques. In a practical arrangement of a wireless mesh network, it is quite unlikely to place too many wireless routers in a small area. In other words, a real-life wireless mesh network can be expected to be fairly sparse and avoid large cliques. In this paper, we focus on a popular special case of sparse networks, those avoiding a fixed graph as a minor.
From a purely theoretical perspective, the graphs avoiding a fixed minor are interesting on their own merit. Famously, they admit the structural decomposition devised by Robertson and Seymour <cit.>, but also have many interesting properties that can be shown directly, such as the existence of sublinear separators <cit.> and admitting layered decomposition into pieces of bounded weak diameter <cit.>. They have been also intensively studied from the algorithmic perspective, including the design of approximation schemes. Several techniques for this purpose have been developed over the last few decades. The bidimensionality technique bounds the treewidth of the graph in terms of the size of the optimal solution and uses the balanced separators to obtain the approximation factor <cit.>. A completely different approach based on local search is known for unweighted problems <cit.>. Dvořák used thin systems of overlays <cit.> and a generalization of Baker's layering approach <cit.> to obtain polynomial-time approximation schemes (PTASes) for a wide class of optimization problems expressible in the first-order logic and its variations. §.§ Our results Our contribution is twofold. First, we show that the maximum edge q-colouring problem is NP-hard on 1-apex graphs. Our approach is similar in spirit to the approximation hardness reduction for the problem on general graphs <cit.>. Secondly, we show that there exists a PTAS for the maximum edge q-colouring problem for graphs avoiding a fixed minor. The result uses the Baker game approach devised in <cit.>, avoiding the use of involved structural results. The technique was developed to strengthen and simplify the results of <cit.> giving PTASes for monotone optimization problems expressible in the first-order logic. § PRELIMINARIES A graph H is a minor of a graph G if a graph isomorphic to H can be obtained from a subgraph of G by a series of edge contractions. We say that G is H-minor-free if G does not contain H as a minor. A graph is called planar if it can be drawn in the plane without crossings. A graph G is a k-apex graph if there exists a set A⊆ V(G) of size at most k such that G-A is planar. The k-apex graphs are one of the standard examples of graphs avoiding a fixed minor; indeed, they are K_k+5-minor-free. Given a function f assigning colours to edges of a graph G and a vertex v∈ V(G), we write f(v) to denote the set {f(e) : e is incident with v}, and f(G)={f(e):e∈ E(G)}. Recall that f is an edge q-colouring of G if and only if |f(v)|≤ q for every v∈ V(G), and the maximum edge q-chromatic number of G is q(G)=max{|f(G)|:f is an edge q-colouring of G}. A matching in a graph G is a set of edges of G where no two are incident with the same vertex. A matching M is maximal if it is not a proper subset of any other matching. Note that a maximal matching is not necessarily the largest possible. Let |G| denote |V(G)|+|E(G)|. For all other definitions related to graphs not defined in this article, we refer readers to any standard graph theory textbook, such as <cit.>. § A PTAS FOR MINOR-FREE GRAPHS Roughly speaking, we employ a divide-and-conquer approach to approximate q(G), splitting G into vertex disjoint parts G_1, …, G_m in a suitable way, solving the problem for each part recursively, and combining the solutions.
An issue that we need to overcome is that it may be impossible to compose the edge q-colourings, e.g., if an edge (v_1,v_2) joins distinct parts and disjoint sets of q colours are used on the neighbourhoods of v_1 and v_2 already. To overcome this issue, we reserve the colour 0 to be used to join the “boundary” vertices. This motivates the following definition. For a set S of vertices of a graph G, an edge q-colouring f is S-composable if |f(v)∖{0}|≤ q-1 for every v∈ S. Let q(G,S) denote the maximum number of non-zero colours that can be used by an S-composable edge q-colouring of G. Let us remark that G has an S-composable edge q-colouring using any non-negative number k'≤q(G,S) of non-zero colours, as all edges of any colour c≠ 0 can be recoloured to 0. For any graph G, we have q(G)=q(G,∅), and q(G, S)≤q(G) for any S⊆ V(G). We need the following approximation for q(G,S) in terms of the size of a maximal matching, analogous to one for edge 2-colouring given in <cit.>. Let us remark that the S-composable edge q-colouring problem is easy to solve for q=1, since we have to use colour 0 on all edges of each component intersecting S and we can use a distinct colour for all edges of any other component. Consequently, in all further claims, we assume q≥ 2. For any graph G, any S⊆ V(G), any maximal matching M in G, and any q≥ 2, |M| ≤q(G,S) ≤q(G)≤ 2q|M|. We can assign to each edge of M a distinct positive colour and to all other edges (if any) the colour 0, obtaining an S-composable edge 2-colouring using |M| non-zero colours. On the other hand, consider the set X of vertices incident with the edges of M. By the maximality of M, the set X is a vertex cover of G, i.e., each edge of G is incident with a vertex of X, and thus at most q|X|=2q|M| colours can be used by any edge q-colouring of G. In particular, as we show next, the lower bound implies that the S-composable edge q-colouring problem is fixed-parameter tractable when parameterized by the value of the solution (a similar observation on the maximum edge 2-colouring is reported in <cit.>). There exists an algorithm that, given a graph G, a set S⊆ V(G), and integers q≥ 2 and s, in time O_q,s(|G|) returns an S-composable edge q-colouring of G using at least min(q(G,S),s) colours. We can in linear time find a maximal matching M in G. If |M|≥ s, we return the colouring that gives each edge of M a distinct non-zero colour and all other edges colour 0. Otherwise, the set X of vertices incident with M is a vertex cover of G of size at most 2s-2, and thus G has treewidth at most 2s-2. Note also that for any s', there exists a formula φ_s',q in monadic second-order logic such that G,S,E_0,…, E_s'φ_s',q if and only if E_0, …, E_s' is a partition of the edges of G with all parts except possibly for E_0 non-empty such that the function f defined by letting f(e)=i for each i∈{0,…,s'} and e∈ E_i is an S-composable edge q-colouring of G. Therefore, we can find an S-composable edge q-colouring of G with the maximum number s'≤ s of non-zero colours using Courcelle's theorem <cit.> in time O_q,s(|G|). A layering of a graph G is a function λ V(G) →ℤ^+ such that |λ(u) - λ(v)| ≤ 1 for every edge (u, v) ∈ E(G). In other words, the graph is partitioned into layers λ^-1(i) for i ∈ℤ^+ such that edges of G only appear within the layers and between the consecutive layers. 
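As a concrete illustration of two ingredients used repeatedly below, the sketch that follows builds a maximal matching greedily (which also yields the edge colouring behind the lower bound in the observation above) and computes a breadth-first-search layering. It is a schematic reference implementation written for this text, not code from the paper; graphs are represented as plain adjacency dictionaries and all names are our own.

from collections import deque

def maximal_matching(adj):
    """Greedily match each still-unmatched vertex with an unmatched neighbour."""
    matched, matching = set(), []
    for u in adj:
        if u in matched:
            continue
        for v in adj[u]:
            if v not in matched:
                matching.append((u, v))
                matched.update((u, v))
                break
    return matching

def matching_colouring(adj, matching):
    """Give each matching edge its own non-zero colour and colour 0 elsewhere;
    every vertex then sees at most two colours, so the result is an edge
    q-colouring for any q >= 2 using |M| non-zero colours."""
    colour = {}
    for c, (u, v) in enumerate(matching, start=1):
        colour[frozenset((u, v))] = c
    for u in adj:
        for v in adj[u]:
            colour.setdefault(frozenset((u, v)), 0)
    return colour

def bfs_layering(adj, root):
    """lambda(v) = BFS distance from root (restricted to the root's component);
    adjacent vertices differ by at most 1, as required of a layering."""
    layer, queue = {root: 0}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in layer:
                layer[v] = layer[u] + 1
                queue.append(v)
    return layer

Both routines run in time linear in |G|, which is consistent with the running-time accounting used in the proofs below.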
Baker <cit.> gave a number of PTASes for planar graphs based on the fact that in a layering of a connected planar graph according to the distance from a fixed vertex, the union of a constant number of consecutive layers induces a subgraph of bounded treewidth. This is not the case for graphs avoiding a fixed minor in general; however, a weaker statement expressed in terms of Baker game holds. We are going to describe that result in more detail in the following subsection. Here, let us state the key observation that makes layering useful for approximating the edge q-chromatic number. For integers r≥ 2 and m such that 0≤ m≤ r-1, the (λ,r,m)-stratification of a graph G is the pair (G',S') such that * G' is obtained from G by deleting all edges uv such that λ(u)≡ m (mod r) and λ(v)≡ m+1 (mod r), and * S' is the set of vertices of G incident with the edges of E(G)∖ E(G'). Let G be a graph, S a subset of its vertices, and q,r≥ 2 integers. Let λ be a layering of G. For m∈{0,…,r-1}, let (G_m,S_m) be the (λ,r,m)-stratification of G. * q(G_m,S∪ S_m)≤q(G,S) for every m∈{0,…,r-1}. * There exists m∈{0,…,r-1} such that q(G_m,S∪ S_m)≥(1-6q/r)q(G,S). Given an (S∪ S_m)-composable edge q-colouring of G_m, we can assign the colour 0 to all edges of E(G)∖ E(G_m) and obtain an S-composable edge q-colouring of G using the same number of non-zero colours, which implies that q(G_m, S∪ S_m)≤q(G,S). Conversely, consider an S-composable edge q-colouring f of G using k=q(G,S) non-zero colours. For m∈{0,…,r-1}, let B_m be the bipartite graph with vertex set S_m and edge set E(G)∖ E(G_m) and let M_m be a maximal matching in B_m. Let 𝒫 be a partition of the set {0,…, r-1} into at most three disjoint parts such that none of the parts contains two integers that are consecutive modulo r. For each P∈𝒫, let M_P=⋃_m∈ P M_m, and observe that M_P is a matching in G. By Observation <ref>, it follows that k≥ |M_P|, and thus 3k≥ |𝒫|k≥∑_P∈𝒫 |M_P|=∑_m=0^r-1|M_m|. Hence, we can fix m∈{0,…,r-1} such that |M_m|≤(3/r)k. By Observation <ref>, any edge q-colouring of B_m, and in particular the restriction of f to the edges of B_m, uses at most 2q|M_m|≤(6q/r)k distinct colours. Let f' be the edge q-colouring of G obtained from f by recolouring all edges whose colour appears on the edges of B_m to colour 0. Clearly f' uses at least (1-6q/r)k non-zero colours. Moreover, each vertex v∈ S_m is now incident with an edge of colour 0, and thus |f'(v)∖{0}|≤ q-1. Therefore, the restriction of f' to E(G_m) is an (S∪ S_m)-composable edge q-colouring, implying that q(G_m,S∪ S_m)≥(1-6q/r)k=(1-6q/r)q(G,S). Hence, if r≫ q, then a good approximation of q(G_m,S∪ S_m) for all m∈{0,…,r-1} gives a good approximation for q(G,S). We will also need a similar observation for vertex deletion; here we only get an additive approximation in general, but as long as the edge q-chromatic number is large enough, this suffices (and if it is not, we can determine it exactly using Observation <ref>). Let G be a graph, S a set of its vertices, and v a vertex of G. Let S'=(S∖{v})∪ N(v). For any integer q≥ 2, we have q(G,S)≥q(G-v,S')≥q(G,S)-q, and in particular if ε>0 and q(G,S)≥ q/ε, then q(G-v,S')≥ (1-ε)q(G,S). Any S'-composable edge q-colouring of G-v extends to an S-composable edge q-colouring of G by giving all edges incident on v colour 0, implying that q(G,S)≥q(G-v,S').
Conversely, any S-composable edge q-colouring of G can be turned into an S'-composable edge q-colouring of G-v by recolouring all edges whose colour appears on the neighbourhood of v to 0 and restricting it to the edges of G-v. This loses at most q non-zero colours (those appearing on the neighbourhood of v), and thus q(G-v,S')≥q(G,S)-q. §.§ Baker game For an infinite sequence 𝚛 = r_1, r_2, …, let rest(𝚛) denote the sequence r_2, r_3, … and let first(𝚛) = r_1. Baker game is played by two players, referred to here as Algorithm and Adversary, on a pair (G,𝚛), where G is a graph and 𝚛 is a sequence of positive integers. The game stops when V(G) = ∅, and Algorithm's objective is to minimise the number of rounds required to make the graph empty. In each round of the game, either * Algorithm chooses a vertex v∈ V(G), Adversary does nothing, and the game moves to the state (G ∖{v}, rest(𝚛)), or * Algorithm selects a layering λ of G, Adversary selects an interval I of first(𝚛) consecutive integers, and the game moves to the state (G[λ^-1(I)], rest(𝚛)). In other words, Adversary selects first(𝚛) consecutive layers and the rest of the graph is deleted. Algorithm wins in k rounds on the state (G, 𝚛) if, regardless of Adversary's strategy, the game stops after at most k rounds. As we mentioned earlier, Algorithm's objective is to minimise the number of rounds of this game, and it is known that Algorithm will succeed if the game is played on a graph that forbids a fixed minor (the upper bound on the number of rounds depends only on the sequence 𝚛 and the forbidden minor, not on G). For every graph F and every sequence 𝚛 = r_1, r_2, … of positive integers, there exists a positive integer k such that for every graph G avoiding F as a minor, Algorithm wins Baker game from the state (G,𝚛) in at most k rounds. Moreover, letting n=|V(G)|, there exists an algorithm that preprocesses G in time O_F(n^2) and then in each round determines a move for Algorithm (leading to winning in at most k rounds in total) in time O_F,𝚛(n). Let us now give the algorithm for approximating the edge q-chromatic number on graphs for which we can quickly win Baker game. There exists an algorithm that, given * a graph G, a set S⊆ V(G), an integer q≥ 2, and * a sequence 𝚛 = r_1, r_2, … of positive integers such that Algorithm wins Baker game from the state (G,𝚛) in at most k rounds and in each state that arises in the game we are able to determine the move that achieves this in time T, returns an S-composable edge q-colouring of G using at least (∏_i=1^k (1-6q/r_i))·q(G,S) non-zero colours, in time O_𝚛,k,q(|G|T). First, we run the algorithm from Observation <ref> with s=⌈ r_1/3⌉. If the obtained colouring uses less than s non-zero colours, it is optimal and we return it. Otherwise, we know that q(G,S)≥ s. In particular, E(G)≠∅, and thus Algorithm has not won the game yet. Let R=(∏_i=2^k (1-6q/r_i)). Let us now consider two cases depending on Algorithm's move from the state (G,𝚛). * Suppose that Algorithm decides to delete a vertex v∈ V(G). We apply the algorithm recursively for the graph G-v, set S'=(S∖{v})∪ N(v), and the sequence rest(𝚛), obtaining an S'-composable edge q-colouring f of G-v using at least R·q(G-v,S') non-zero colours. By Lemma <ref> with ε=q/s, we conclude that f uses at least R·q(G-v,S')≥ R(1-ε)q(G,S)≥ R(1-6q/r_1)q(G,S) non-zero colours. We turn f into an S-composable edge q-colouring of G by giving all edges incident on v colour 0 and return it. * Suppose that Algorithm chooses a layering λ. We now recurse into several subgraphs, each corresponding to a valid move of Adversary. For each m∈{0,…, r_1-1}, let (G_m,S_m) be the (λ,r_1,m)-stratification of G.
Note that G_m is divided into parts G_m,1, …, G_m,t_m, each contained in the union of r_1 consecutive layers of λ. For each m∈{0,…,r_1-1} and each i∈{1,…,t_m}, we apply the algorithm recursively for the graph G_m,i, set S_m,i=(S_m∪ S)∩ V(G_m,i), and the sequence rest(𝚛), obtaining an S_m,i-composable edge q-colouring f_m,i of G_m,i using at least R·q(G_m,i,S_m,i) non-zero colours. Let f_m be the union of the colourings f_m,i for i∈{1,…,t_m} and observe that f_m is an (S∪ S_m)-composable edge q-colouring of G_m using at least R·q(G_m,S∪ S_m) non-zero colours. We choose m∈{0,…, r_1-1} such that f_m uses the largest number of non-zero colours, extend it to an S-composable edge q-colouring of G by giving all edges of E(G)∖ E(G_m) colour 0, and return it. By Lemma <ref>, the colouring uses at least R·q(G_m,S∪ S_m)≥ R(1-6q/r_1)q(G,S) non-zero colours, as required. For the time complexity, note that each vertex and edge of G belongs to at most ∏_i=1^d r_i subgraphs processed at depth d of the recursion, and since the depth of the recursion is bounded by k, the sum of the sizes of the processed subgraphs is O_𝚛,k,q(|G|). Excluding the recursion and time needed to select Algorithm's moves, the actions described above can be performed in linear time. Consequently, the total time complexity is O_𝚛,k,q(|G|T). Our main result is then just a simple combination of this lemma with Theorem <ref>. There exists an algorithm that given an F-minor-free graph G and integers q,p≥ 2, returns in time O_F,p,q(|G|^2) an edge q-colouring of G using at least (1-1/p)q(G) colours. Let 𝚛 be the infinite sequence such that r_i = 10pqi^2 for each positive integer i, and let k be the number of rounds in which Algorithm wins Baker game from the state (G',𝚛) for any F-minor-free graph G', using the strategy given by Theorem <ref>. Note that R =∏_i=1^k (1-6q/r_i)≥ 1-∑_i=1^∞ 6q/r_i = 1-(3/(5p))∑_i=1^∞ 1/i^2 = 1-(3/(5p))·(π^2/6) ≥ 1-1/p. Let n=|G|. After running the preprocessing algorithm from Theorem <ref>, we apply the algorithm from Lemma <ref> with S=∅ and T=O_F,𝚛(n)=O_F,p,q(n), obtaining an edge q-colouring of G using at least R·q(G,∅)=R·q(G)≥ (1-1/p)q(G) colours, in time O_F,p,q(n^2). § HARDNESS ON 1-APEX GRAPHS In this section, we study the complexity of the maximum edge 2-colouring problem on 1-apex graphs. We present a reduction from a restricted variant of planar 3-SAT, which is known to be NP-hard <cit.>. The incidence graph G(φ) of a Boolean formula φ in conjunctive normal form is the bipartite graph whose vertices are the variables appearing in φ and the clauses of φ, and each variable is adjacent exactly to the clauses in which it appears. A Boolean formula φ in conjunctive normal form is an instance of this restricted variant if * each clause of φ contains at most three distinct literals, * each variable of φ appears in exactly three clauses, and * the incidence graph G(φ) is planar. In this problem, we ask whether such a formula φ has a satisfying assignment. We follow the strategy used in <cit.>, using an intermediate maximum edge {1,2}-colouring problem. The instance of this problem consists of a graph G, a function g : V(G) →{1, 2}, and a number t. An edge g-colouring of G is an edge colouring f such that |f(v)|≤ g(v) for each v∈ V(G). The objective is to decide whether there exists an edge g-colouring of G using at least t distinct colours. We show the maximum edge {1,2}-colouring problem is NP-hard on 1-apex graphs by establishing a reduction from this satisfiability problem. We then use this result to show that the maximum edge q-colouring problem on 1-apex graphs is NP-hard when q ≥ 2. Let us start by establishing the intermediate result.
The maximum edge {1, 2}-colouring problem is NP-hard even when restricted to the class of 1-apex graphs. Consider a given formula φ with m clauses and n variables and a plane drawing of its incidence graph G(φ). Let the clauses of φ be c_1, …, c_m and the variables x_1, …, x_n; we use the same symbols for the corresponding vertices of G(φ). Let H be a graph obtained from G(φ) as follows. For all j ∈{1,2, …, n}, if the clauses in which x_j appears are c_ℓ_j,1, c_ℓ_j,2, and c_ℓ_j,3, split x_j into three vertices x_j,1, x_j,2, and x_j,3, where x_j,a is adjacent to c_ℓ_j,a for a∈{1,2,3}. For 1≤ a<b≤ 3, add a vertex n_j,a,b and if x_j appears positively in c_ℓ_j,a and negatively in c_ℓ_j,b or vice versa, make it adjacent to x_j,a and x_j,b (otherwise leave it as an isolated vertex). Finally, we add a new vertex u adjacent to c_i for i∈{1,…,m} and to n_j,a,b for j∈{1,…,n} and 1≤ a<b≤ 3. Clearly, H is a 1-apex graph, since H-u is planar. Let us define the function g : V(H) →{1, 2} as follows: * g(u) = 1, * g(c_i) = 2 for all i ∈{1, 2, … , m}, * g(x_j,a) = 1 for all j ∈{1, 2, … , n} and a∈{1,2,3}, and * g(n_j,a,b) = 2 for all j ∈{1, 2, … , n} and 1≤ a < b≤ 3. First, we show if there exists a satisfying assignment for the formula φ, then H has an edge g-colouring using m+1 colours. For i∈{1,…, m}, choose a vertex x_j,a adjacent to c_i such that the (positive or negative) literal of c_i containing the variable x_j is true in the assignment, and give colour i to the edge (c_i,x_j,a) and all other edges incident on x_j,a (if any). All other edges receive colour 0. Clearly, u is only incident with edges of colour 0, for each j∈{1,…,n} and a∈{1,…,3} all edges incident on x_j,a have the same colour, and for each i∈{1,…,m}, the edges incident on c_i have colours 0 and i. Finally, consider a vertex n_j,a,b for some j∈{1,…,n} and 1≤ a<b≤ 3 adjacent to x_j,a and x_j,b. By the construction of H, the variable x_j appears positively in c_ℓ_j,a and negatively in c_ℓ_j,b or vice versa, and thus at most one of the corresponding literals is true in the assignment. Hence, n_j,a,b is incident with edges of colour 0 and of at most one of the colours ℓ_j,a and ℓ_j,b. Conversely, suppose that there exists an edge g-colouring f of H using at least m+1 distinct colours, and let us argue that there exists a satisfying assignment for φ. Since g(u)=1, we can without loss of generality assume that each edge incident with u has colour 0. If a colour c≠ 0 is used to colour the edge (n_j,a,b, x_j,k) for some j∈{1,…,n}, 1≤ a <b≤ 3, and k∈{a,b}, then since g(x_j,k) = 1, this colour is also used on the edge (x_j,k, c_ℓ_j,k). Hence, every non-zero colour appears on an edge incident with a clause. Since each clause is also joined to u by an edge of colour 0, it can be only incident with edges of one other colour. Since f uses at least m+1 colours, we can without loss of generality assume that for i∈{1,…,m}, there exists an edge (c_i, x_j,a) for some j∈{1,…,n} and a∈{1,…,3} of colour i. Assign to x_j the truth value that makes the literal of c_i in which it appears true. We only need to argue that this rule does not cause us to assign to x_j both values true and false. If that were the case, then there would exist 1≤ a<b≤ 3 such that the variable x_j appears positively in clause c_ℓ_j,a and negatively in clause c_ℓ_j,b or vice versa, the edge corresponding to the vertex x_j,a has colour ℓ_j,a and the edge corresponding to the vertex x_j,b has colour ℓ_j,b.
However, since g(x_j,a)=g(x_j,b)=1, this would imply that n_j,a,b is incident with the edge (n_j,a,b, x_j,a) of colour ℓ_j,a, the edge (n_j,a,b,x_j,b) of colour ℓ_j,b, and the edge (n_j,a,b, u) of colour 0, which is a contradiction. Therefore, we described how to transform in polynomial time an instance φ of this satisfiability problem into an equivalent instance (H, g, t=m+1) of the maximum edge {1,2}-colouring problem. Now we are ready to prove the main theorem of this section. The proof strategy is similar to the NP-hardness proof in <cit.>. We include the details for completeness. For an arbitrary integer q ≥ 2, the maximum edge q-colouring problem is NP-hard even when the input instance is restricted to 1-apex graphs. We construct a reduction from the maximum edge {1,2}-colouring problem on 1-apex graphs. Let G, g, t be an instance of this problem, and let n=|V(G)| and r=|{v ∈ V(G) : g(v) = 1}|. We create a graph G' from G by adding for each vertex v ∈ V(G) exactly q - g(v) pendant vertices adjacent to v. Clearly, G' is a 1-apex graph. We show that G has an edge g-colouring using at least t distinct colours if and only if G' has an edge q-colouring using at least t + r + (q - 2)n colours. In one direction, given an edge g-colouring of G using at least t colours, we colour each of the added pendant edges using a new colour, obtaining an edge q-colouring of G' using at least t + r + (q - 2)n colours. Conversely, let f be an edge q-colouring of G' using at least t + r + (q - 2)n colours. Process the vertices v∈ V(G) one by one, performing for each of them the following operation: For each added pendant vertex u adjacent to v in order, let c' be the colour of the edge (u,v), delete u, and if v is incident with an edge e of colour c≠ c', then recolour all remaining edges of colour c' to c. Note that the number of eliminated colours is bounded by the number r + (q - 2)n of pendant vertices, and thus the resulting colouring still uses at least t colours. Moreover, at each vertex v∈ V(G), we either end up with all edges incident on v having the same colour or we eliminated one colour from the neighbourhood of v for each adjacent pendant vertex; in the latter case, since |f(v)|≤ q and v is adjacent to q-g(v) pendant vertices, at most g(v) colours remain on the edges incident on v. Hence, we indeed obtain an edge g-colouring of G using at least t colours. § FUTURE DIRECTIONS We conclude with some possible directions for future research. The maximum edge 2-colouring problem on 1-apex graphs is NP-hard, but the complexity of the problem is unknown when the input is restricted to planar graphs. We consider this an interesting question left unanswered. The best-known approximation ratio is 2, without any restriction on the input instances, whereas a lower bound of (1 + 1/q), for q ≥ 2, is known assuming the unique games conjecture. There are not many new results reported in the last decade that bridge this gap. We think that even a (2 - ε)-approximation algorithm, for any ε > 0, would be substantial progress in that direction. The Baker game technique can yield PTASes for monotone optimization problems beyond problems expressible in the first-order logic. Clearly, the technique can't be extended to the entire class of problems expressible in the monadic second-order logic. It will be interesting to characterise the problems expressible in the monadic second-order logic where the Baker game yields a PTAS. § ACKNOWLEDGEMENT The second author would like to thank Benjamin Moore, Jatin Batra, Sandip Banerjee and Siddharth Gupta for helpful discussions on this project.
He also likes to thank the organisers of Homonolo for providing a nice and stimulating research environment. splncs04 10 AdamaszekP10 Adamaszek, A., Popa, A.: Approximation and hardness results for the maximum edge q-coloring problem. In: Algorithms and Computation - 21st International Symposium, ISAAC 2010, Jeju Island, Korea, December 15-17, 2010, Proceedings, Part II. Lecture Notes in Computer Science, vol. 6507, pp. 132–143 (2010), <https://doi.org/10.1007/978-3-642-17514-5_12> AdamP Adamaszek, A., Popa, A.: Approximation and hardness results for the maximum edge q-coloring problem. Journal of Discrete Algorithms 38-41,  1–8 (2016), <https://doi.org/10.1016/j.jda.2016.09.003> alon1990separator Alon, N., Seymour, P.D., Thomas, R.: A separator theorem for graphs with an excluded minor and its applications. In: Proceedings of the 22nd Annual ACM Symposium on Theory of Computing, May 13-17, 1990, Baltimore, Maryland, USA. pp. 293–299. ACM (1990), <https://doi.org/10.1145/100216.100254> baker1994approximation Baker, B.S.: Approximation algorithms for NP-complete problems on planar graphs. Journal of ACM 41(1), 153–180 (1994), <https://doi.org/10.1145/174644.174650> CabelloG15 Cabello, S., Gajser, D.: Simple PTAS's for families of graphs excluding a minor. Discrete Applied Mathematics 189, 41–48 (2015), <https://doi.org/10.1016/j.dam.2015.03.004> Chandran Chandran, L.S., Hashim, T., Jacob, D., Mathew, R., Rajendraprasad, D., Singh, N.: New bounds on the anti-ramsey numbers of star graphs. CoRR abs/1810.00624 (2018), <http://arxiv.org/abs/1810.00624> ChandranLS22 Chandran, L.S., Lahiri, A., Singh, N.: Improved approximation for maximum edge colouring problem. Discrete Applied Mathematics 319, 42–52 (2022), <https://doi.org/10.1016/j.dam.2021.05.017> courcelle Courcelle, B.: The monadic second-order logic of graphs. i. recognizable sets of finite graphs. Information and Computation 85(1), 12–75 (1990), <https://doi.org/10.1016/0890-5401(90)90043-H> DawarGKS06 Dawar, A., Grohe, M., Kreutzer, S., Schweikardt, N.: Approximation schemes for first-order definable optimisation problems. In: 21th IEEE Symposium on Logic in Computer Science (LICS 2006), 12-15 August 2006, Seattle, WA, USA, Proceedings. pp. 411–420. IEEE Computer Society (2006), <https://doi.org/10.1109/LICS.2006.13> DemaineH05 Demaine, E.D., Hajiaghayi, M.T.: Bidimensionality: new connections between FPT algorithms and ptass. In: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, Vancouver, British Columbia, Canada, January 23-25, 2005. pp. 590–601. SIAM (2005), <http://dl.acm.org/citation.cfm?id=1070432.1070514> Diestel Diestel, R.: Graph Theory, 4th Edition, Graduate texts in mathematics, vol. 173. Springer (2012) Dvorak18 Dvorák, Z.: Thin graph classes and polynomial-time approximation schemes. In: Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018. pp. 1685–1701. SIAM (2018), <https://doi.org/10.1137/1.9781611975031.110> Dvorak20 Dvorák, Z.: Baker game and polynomial-time approximation schemes. In: Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020. pp. 2227–2240. SIAM (2020), <https://doi.org/10.1137/1.9781611975994.137> Erdos Erdös, P., Simonovits, M., Sós, V.T.: Anti-Ramsey theorems. Infinite and finite sets (Colloquium, Keszthely, 1973; dedicated to P. 
Erdös on his 60th birthday) 10(II), 633–643 (1975) Feng Feng, W., Zhang, L., Wang, H.: Approximation algorithm for maximum edge coloring. Theoretical Computer Science 410(11), 1022–1029 (2009), <https://doi.org/10.1016/j.tcs.2008.10.035> FominLRS11 Fomin, F.V., Lokshtanov, D., Raman, V., Saurabh, S.: Bidimensionality and EPTAS. In: Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011. pp. 748–759. SIAM (2011), <https://doi.org/10.1137/1.9781611973082.59> FujitaMO Fujita, S., Magnant, C., Ozeki, K.: Rainbow generalizations of Ramsey theory: A survey. Graphs and Combinatorics 26(1), 1–30 (2010), <https://doi.org/10.1007/s00373-010-0891-3> GorgolL10 Gorgol, I., Lazuka, E.: Rainbow numbers for small stars with one edge added. Discussiones Mathematicae Graph Theory 30(4), 555–562 (2010), <https://doi.org/10.7151/dmgt.1513> GoyalKM13 Goyal, P., Kamat, V., Misra, N.: On the parameterized complexity of the maximum edge 2-coloring problem. In: Mathematical Foundations of Computer Science 2013 - 38th International Symposium, MFCS 2013, Klosterneuburg, Austria, August 26-30, 2013. pp. 492–503 (2013), <https://doi.org/10.1007/978-3-642-40313-2_44> Har-PeledQ17 Har-Peled, S., Quanrud, K.: Approximation algorithms for polynomial-expansion and low-density graphs. SIAM Journal of Computing 46(6), 1712–1744 (2017), <https://doi.org/10.1137/16M1079336> Jiang02 Jiang, T.: Edge-colorings with no large polychromatic stars. Graphs and Combinatorics 18(2), 303–308 (2002), <https://doi.org/10.1007/s003730200022> klein1995 Klein, P.N., Plotkin, S.A., Rao, S.: Excluded minors, network decomposition, and multicommodity flow. In: Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, May 16-18, 1993, San Diego, CA, USA. pp. 682–690. ACM (1993), <https://doi.org/10.1145/167088.167261> KodialamN05 Kodialam, M.S., Nandagopal, T.: Characterizing the capacity region in multi-radio multi-channel wireless mesh networks. In: Proceedings of the 11th Annual International Conference on Mobile Computing and Networking, MOBICOM 2005, Cologne, Germany, August 28 - September 2, 2005. pp. 73–87. ACM (2005), <https://doi.org/10.1145/1080829.1080837> ManousSTV96 Manoussakis, Y., Spyratos, M., Tuza, Z., Voigt, M.: Minimal colorings for properly colored subgraphs. Graphs and Combinatorics 12(1), 345–360 (1996), <https://doi.org/10.1007/BF01858468> Montel06 Montellano-Ballesteros, J.J.: On totally multicolored stars. Journal of Graph Theory 51(3), 225–243 (2006), <https://doi.org/10.1002/jgt.20140> MontelN02 Montellano-Ballesteros, J.J., Neumann-Lara, V.: An anti-Ramsey theorem. Combinatorica 22(3), 445–449 (2002), <https://doi.org/10.1007/s004930200023> Raniwala Raniwala, A., Chiueh, T.: Architecture and algorithms for an IEEE 802.11-based multi-channel wireless mesh network. In: INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies, 13-17 March 2005, Miami, FL, USA. pp. 2223–2234 (2005), <https://doi.org/10.1109/INFCOM.2005.1498497> RobertsonS03a Robertson, N., Seymour, P.D.: Graph minors. XVI. excluding a non-planar graph. Journal of Combinatorial Theory, Ser. B 89(1), 43–76 (2003), <https://doi.org/10.1016/S0095-8956(03)00042-X> Schier04 Schiermeyer, I.: Rainbow numbers for matchings and complete graphs. 
Discrete Mathematics 286(1-2), 157–162 (2004), <https://doi.org/10.1016/j.disc.2003.11.057> SenMGB07 Sen, A., Murthy, S., Ganguly, S., Bhatnagar, S.: An interference-aware channel assignment scheme for wireless mesh networks. In: Proceedings of IEEE International Conference on Communications, ICC 2007, Glasgow, Scotland, UK, 24-28 June 2007. pp. 3471–3476. IEEE (2007), <https://doi.org/10.1109/ICC.2007.574> Simono Simonovits, M., Sós, V.: On restricted colourings of K_n. Combinatorica 4(1), 101–110 (1984), <https://doi.org/10.1007/BF02579162> Tovey84 Tovey, C.A.: A simplified np-complete satisfiability problem. Discrete Applied Mathematics 8(1), 85–89 (1984), <https://doi.org/10.1016/0166-218X(84)90081-7> WanAJWX15 Wan, P., Al-dhelaan, F., Jia, X., Wang, B., Xing, G.: Maximizing network capacity of mpr-capable wireless networks. In: 2015 IEEE Conference on Computer Communications, INFOCOM 2015, Kowloon, Hong Kong, April 26 - May 1, 2015. pp. 1805–1813. IEEE (2015), <https://doi.org/10.1109/INFOCOM.2015.7218562> WanCWY11 Wan, P., Cheng, Y., Wang, Z., Yao, F.F.: Multiflows in multi-channel multi-radio multihop wireless networks. In: INFOCOM 2011. 30th IEEE International Conference on Computer Communications, 10-15 April 2011, Shanghai, China. pp. 846–854. IEEE (2011), <https://doi.org/10.1109/INFCOM.2011.5935308>
http://arxiv.org/abs/2307.02891v1
20230706095356
BaBE: Enhancing Fairness via Estimation of Latent Explaining Variables
[ "Ruta Binkyte", "Daniele Gorla", "Catuscia Palamidessi" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CY" ]
BaBE: Enhancing Fairness via Estimation of Latent Explaining Variables
Ruta Binkyte Daniele Gorla Catuscia Palamidessi
=========================================================================

We consider the problem of unfair discrimination between two groups and propose a pre-processing method to achieve fairness. Corrective methods like statistical parity usually lead to bad accuracy and do not really achieve fairness in situations where there is a correlation between the sensitive attribute S and the legitimate attribute E (explanatory variable) that should determine the decision. To overcome these drawbacks, other notions of fairness have been proposed, in particular, conditional statistical parity and equal opportunity. However, E is often not directly observable in the data, i.e., it is a latent variable. We may observe some other variable Z representing E, but the problem is that Z may also be affected by S, hence Z itself can be biased. To deal with this problem, we propose BaBE (Bayesian Bias Elimination), an approach based on a combination of Bayesian inference and the Expectation-Maximization method, to estimate the most likely value of E for a given Z for each group. The decision can then be based directly on the estimated E. We show, by experiments on synthetic and real data sets, that our approach provides a good level of fairness as well as high accuracy.

§ INTRODUCTION

An increasing number of decisions regarding the daily lives of human beings rely on machine learning (ML) predictions, and it is, therefore, crucial to ensure that they are not only accurate, but also objective and fair. One of the first group fairness notions proposed in the literature was statistical parity (SP) <cit.>, which enforces the probability of a positive prediction to be equal across different groups. Let the prediction and the group be represented, respectively, by the random variables Ŷ and S, both of which are assumed to be binary for simplicity, and let Ŷ=1 stand for the positive prediction. Then SP is formally described by the following formula, where P[·|·] represents conditional probability: P[Ŷ=1|S=1] = P[Ŷ=1|S=0]. However, SP has been criticized for causing loss of accuracy and for ignoring circumstances that could justify disparity. A more refined notion is conditional statistical parity (CSP) <cit.>, which allows some disparity as long as it is justified by explanatory factors. For example, a hiring decision positively biased towards Group 1 could be justified if Group 1 has a higher education level than Group 0 on average. CSP is formally described as follows, where E is a random variable representing the ensemble of explanatory features: P[Ŷ=1|S=1,E=e] = P[Ŷ=1|S=0,E=e] for all e. The most common pre-processing approach to achieve CSP (or an approximation of it) consists in editing the label Y (decision) in the training data, according to some heuristic, so as to ensure that the number of samples with Y=1, S=1, and E=e is approximately the same as the number of samples with Y=1, S=0, and E=e. One problem, however, is that often E is not directly observable in the data, i.e., it is a latent variable.
Usually, we can observe some other variable Z that is representative of E, but the problem is that Z may be also influenced by the sensitive attribute S, hence Z itself can be biased. We illustrate this scenario with the following examples. The SAT (Scholastic Assessment Test) is a standardized test widely used for college admissions in the United States. In principle, the SAT score is supposed to indicate the skill level of the applicant, and therefore her potential to succeed in college. However, the performance at the test can be affected by other socio-economic, psychological, and cultural factors. For instance, a recent study <cit.> points out that, on average, black students are less likely to undergo the financial burden of retaking the test than white students. This causes a racial gap in the scores, since retaking the test usually improves the result. Another study <cit.> reports that, on average, girls score approximately 30 points less on SAT than boys, despite the fact that girls routinely achieve higher grades in school. According to <cit.>, the cause is the higher sensitivity to stress and test anxiety among females. Many healthcare systems in the United States rely on prediction algorithms to identify patients in need of assistance. One of the most used indicators is the individual healthcare expenses, as they are easily available in the insurance claim data. However, healthcare spending is influenced not only by the health condition, but also by the socio-economic status. A recent study <cit.> shows that typical algorithms used by these healthcare systems are negatively biased against black patients, in the sense that, for the same prediction score, black patients are in average sicker than the white ones. According to <cit.>, this is due to the bias in the healthcare spending data, since black patients spend less on healthcare due to lower financial capabilities and lower level of trust towards the white-dominated medical system and practitioners. In the above examples, the “true skills” and the “true health status”, respectively, are the legitimate features E (explanation) on which we would like to base the decision. Unfortunately E is not directly observable. What we can observe, instead, is the result of the SAT test and the healthcare-related spending, respectively. These are represented by the variable that we call Z. These indicators, however, do not faithfully represent E, because they are influenced also by other factors, namely the economical status (or the gender), and the race, respectively. These are the sensitive attribute S. The line of research that advocates the use of statistical parity <cit.> adheres to the "We are all equal" <cit.> principle, and makes the basic assumption that E and S are independent. However, in many cases, like for instance in decisions regarding the medical treatment of genetic illnesses, race or gender could have a direct effect on the likeliness of the medical condition. For example, in our second running example, the real health status is on average lower in the black population because of socio-economic factors. Therefore, we allow the possibility of a link between the sensitive attribute S and the explaining value E, and aim to remove the spurious and discriminatory link introduced by the link between S and Z. The method we propose to remove the discrimination works equally well whether or not there is a link between S and E, and it does not modify this relation. 
To summarize, in the original (unfair) scenario the decision Y is based on Z, which is influenced by both E and S. The situation is represented in Figure  <ref>(a). The arrow from S to Z represents that there is a causal relation between S and Z, and similarly for the other solid arrows[Note that the particular structure between E, S and Z represents what in causality is called a collider.], while the dashed arrow between S and E represent a relation that may or may not be present. In order to take a fair decision, we would like to base the decision Y only on E, but, as explained before, E may not be directly available. Therefore, we need to determine what is the most likely value of E for the given values of S and Z. To this purpose, we will derive the conditional distribution of E given Z and S, i.e. P[E|Z,S]. The objective is illustrated in Figure <ref>(b). The method we propose relies on having at our disposal some additional knowledge, namely an estimation of the conditional distribution of Z given S and E, i.e., P[Z|E,S]. This estimation can be obtained by collecting additional data. For instance, for the case of Example <ref>, we could use the richer set of biomarkers (not readily available otherwise), like it is done in <cit.>. Alternatively, it can be produced by studies or experiments in a controlled environment. For instance, for Example <ref>, we could assess skills in some subjects by in-depth examinations, and derive statistics about their SAT performance both at the first attempt and after a number of retakes. Another example is the study of symptoms (Z) induced by certain diseases (E): P[Z|E,S] can be statistically estimated from medical data collected by hospitals. A further reason for assuming that we dispose of P[Z|E,S] and not of P[E|Z,S] is that the latter depends on the distribution of E, which can vary greatly depending on the geographical area, on the social context, etc. For example, the racial health gap can be different in Europe and in the United States, where the experiments or data collection took place. In contrast, P[Z|E,S] may be more “universal”, so it is convenient to invest in the estimation of the latter, that can be done once and then transferred to different contexts. Indeed, one advantage of our approach is that it allows transferring of causal knowledge. Namely, once we learn the relation P[Z|E,S], the method can be reused with a population with different proportions, i.e. different P[E|S] (but the same P[Z|E,S]). For more discussion about this point we refer to <cit.>. To obtain P[E|Z,S] from P[Z|E,S] we can then apply the conditional Bayes theorem: P[E=e|Z=z,S=s] = P[Z=z|E=e,S=s] · P[E=e|S=s] /P[Z=z|S=s] However, while P[Z=z|S=s] can be estimated from the data at our disposal, the problem is that the prior P[E=e|S=s] is unknown. In order to estimate it, we will use the Expectation-Maximization method (EM)  <cit.>, a powerful statistical technique to estimate latent variables as the maximum likelihood parameters of empirical data observations. Since our method relies on the Bayes theorem, we call it BaBE, for Bayesian Bias Elimination. Once estimated P[E|Z,S], we can pre-process the training data by assigning a decision Y based on the most likely value e of E, for given values of S and Z. If e does not have enough probability mass, however, we may not achieve CSP, or even a good approximation of it. 
In such case, we can base the decision on a threshold for the estimated E, aiming at achieving equal opportunity (EO) <cit.> instead, that we regard as a relaxation of CSP. Formally: P[Ŷ=1| Y=1,S=1] = P[Ŷ=1|Y=1,S=0], where Y represents the “true decision”, i.e., the decision based on a threshold for the real value of E. We validate our method by performing experiments, both on synthetic datasets and on the real healthcare data provided by <cit.>. In both cases, we obtain a very good estimation of P[E|S], and we achieve a good level of both accuracy and fairness. The contributions of our paper are the following: * We propose an approach to estimate the distribution of a latent explaining variable E, using the Expectation-Maximization method (EM). To the best of our knowledge, this is the first time that EM is used to achieve fairness without assuming the independence between E and the sensitive attribute S. From the above, we then derive an estimation of P[E|Z,S]. * Using the estimation of P[E|Z,S], we show how to to estimate the values of E and Y for each value of Z and S, in order to achieve CSP or EO. * We show experimentally that our proposal outperforms other approaches for fairness, in terms of CSP, EO, accuracy, and other metrics for fairness and precision of the estimations. The software used for implementing our approach and for performing the experiments is available in the (anonymous) public GitHub Repository https://github.com/Babe-Algorithm/BABEhttps://github.com/Babe-Algorithm/BABE. Related Work The notion of fairness that we consider in this work was introduced in <cit.> under the name conditional non-discrimination, and it is known nowadays as conditional statistical parity <cit.>. In <cit.>, fairness is achieved through data pre-processing, by applying local massaging or local preferential sampling techniques to achieve P[Ŷ=1|S=1,E=1] = P[Ŷ=1|S=0,E=e]. However, they consider only an observable explanatory variable E, not a latent E. Furthermore, they assume that E may be correlated with the sensitive attribute S, and that the influence that E has on the decision Ŷ is legitimate. Note that our Z, although observable, cannot be considered as an explanatory variable, because we are assuming it is influenced by the sensitive attribute in a way that would make it unfair to base the decision on Z. To better understand the difference, consider one of the main examples used in <cit.> to illustrate the idea, which is a kind of Berkeley admission anomaly, an instance of the Simpson paradox <cit.>. In this example, the admittance in a certain university looks biased against females, but the disparity can actually be explained by the fact that female students tend to choose a more selective program. In this case, the explanatory variable is a mediator (the choice of the program), and it is assumed to be legitimate as a cause for disparity. In contrast, in our example the observed score is considered to be influenced by social discrimination, hence it cannot be directly used as an explanatory variable. The work closest to ours is <cit.>, where there is a model containing a latent variable whose distribution is discovered through the Expectation Maximization method. However, in <cit.> the notion of fairness considered is statistical parity. Following the notation of our paper, they aim at obtaining P[E=1|S=0] = P[E=1|S=1]. 
Using this requirement as a constraint (thus applying a sort of self-fulfilling prophecy approach) and other constraints such as the preservation of the total ratio of positive decisions, they determine what the distribution P[Z|E,S] needs to be, they distribute the probability mass uniformly on all attributes, and they finally apply the EM method to determine the fair labels. In contrast, we are aiming at discovering what is the most probable value of E for each combination of values of the other attributes (S and Z), so as to take a fair decision based on E, considered as the explanatory variable. We do not require statistical parity and we do not derive P[Z|E,S] using constraints like statistical parity, nor do we assume a uniform distribution on all attributes. Instead, we use external knowledge as prior knowledge for applying the EM method. Another difference is that they try to optimize the preservation of accuracy with respect to the observed biased labels, whereas we consider accuracy towards the true fair label dependent on E, considered as the actual attribute on which the decision should be made. Similar in spirit to <cit.>, <cit.> tries to discover the latent variable which is maximally informative about the decision, while minimizing the correlation with the sensitive attribute (statistical disparity); this is done via a deep learning technique. Similarly, <cit.> use deep learning latent variable models where they consider latent fair decisions or the sensitive attribute as an additional confounder itself. <cit.> considers latent confounders between the sensitive attribute and the decision; by introducing a counterfactual fairness notion, they retrieve the counterfactual values. <cit.> uses probabilistic circuits to impose statistical parity and to learn a relationship between the latent fair decision and other variables. Finally, <cit.> uses a notion of fairness called disparate impact, which is similar to statistical disparity, except that it is defined as a ratio instead of a difference between the probabilities of positive decisions for each group. Similarly to our work, <cit.> applies a corrective factor to the outcome of the observed variable Z, but their goal is to minimize the disparate impact (within a certain allowed threshold α), which is again in the spirit of minimizing statistical disparity. Also their technique is very different: they consider the distributions on the observed variable Z for each group, and they compute new distributions that minimize the earth movers' distance and achieve the threshold α. Then, they map each value of Z (for each group) on the new distribution so to maintain the percentile. §.§ Plan of the paper Section 2 provides some background about the notions used in the paper. Section 3 explains the assumptions and the components of our approach. Section 4 evaluates our proposal via experiments on synthetic and real data. Section 5 concludes. The technical details and the proofs are in Appendix A. § PRELIMINARIES AND NOTATION Ê, Ŷ and Y notations In this paper, Ê (with generic value ê) represents the estimation of the explanatory variable E. Similarly, Ŷ_Ê (with generic value ŷ) represents the estimation of the decision, based on Ê, rather than the prediction of the model. To put it in context, recall that we are proposing a pre-processing method: ŷ represents the value that we assign as decision in a sample of the training data during the pre-processing phase. 
The fairness and precision notions are defined with respect to these estimations. We use Y_Z to indicate the biased decision based on Z, and Y_E for the "true" decision based on E. When clear from the context, we may write Y instead of Y_E.

Maximum-likelihood Estimation and the Expectation-Maximization Framework. Let O be a random variable depending on an unknown parameter θ. Given that we observe O = o, the aim is to find the value of θ that maximizes the probability of this observation, and that therefore is its best explanation. To this purpose, we use the log-likelihood function L(θ) = log P[O=o | θ]. A Maximum-Likelihood Estimation (MLE) of the parameter θ is then defined as argmax_θ L(θ) (which is the θ that maximizes P[O=o | θ], since log is monotone). The Expectation-Maximization (EM) framework <cit.> is a powerful method for computing argmax_θ L(θ).

§.§ Metrics for the quality of estimations

The Wasserstein distance. This distance is defined between probability distributions on a metric space. Let 𝒳 be a set provided with a distance d, and μ,ν be two discrete probability distributions on 𝒳. The Wasserstein distance between μ and ν is defined as 𝒲(μ,ν) = min_α ∑_x,y∈𝒳 α(x,y) d(x,y), where α represents a coupling, i.e., a joint distribution with marginals μ and ν: ∑_y∈𝒳 α(x,y) = μ(x) for every x∈𝒳, and ∑_x∈𝒳 α(x,y) = ν(y) for every y∈𝒳.

Accuracy. Let X,Y be two random variables with supports 𝒳 and 𝒴 respectively, and joint distribution P[X,Y]. Let f:𝒳→𝒴 be a function that, given x∈𝒳, estimates the corresponding y, and let ŷ be the result, i.e., ŷ=f(x). The accuracy of f is defined as the expected value of 1_ŷ=y, the function that gives 1 if ŷ=y, and 0 otherwise. When the distribution is unknown, the accuracy is estimated empirically via a set of pairs {(x_i,y_i) | i∈ℐ} independently sampled from P[X,Y] (testing set), and is defined as Acc(Ŷ, Y) = (1/|ℐ|) ∑_i∈ℐ 1_ŷ_i=y_i, where ŷ_i = f(x_i).

Distortion. If the variable to be predicted ranges over a metric space, and the metric is important for decision-making, as is the case of E in our examples, accuracy is not always the best way to measure the quality of the estimation. Arguably, it is more suitable to use the distortion, i.e., the expected distance between the true value and its estimation. Using the testing set {((z_i,s_i),e_i) | i∈ℐ}, the distortion in the estimation of E is defined as Dist(Ê, E) = (1/|ℐ|) ∑_i∈ℐ |ê_i-e_i|, where ê_i = f(z_i,s_i).

§.§ Metrics for fairness

SP, CSP, and EO are rarely achieved exactly, since they require a perfect match. It is therefore useful to quantify the level of (un)fairness, i.e., the difference between the two groups. We will use:
Statistical parity difference (SPD): P[Ŷ_Ê=1|S=1] - P[Ŷ_Ê=1|S=0].
Conditional statistical parity difference (CSPD_e): P[Ŷ_Ê=1|E=e,S=1] - P[Ŷ_Ê=1|E=e,S=0].
Expected conditional statistical parity difference (CSPD): ∑_e P[E=e] CSPD_e.
Equal opportunity difference (EOD): P[Ŷ_Ê=1|Y_E=1,S=1] - P[Ŷ_Ê=1|Y_E=1,S=0].

§ THE BABE METHOD

In this section we describe our approach to estimating E and Y_E for correcting the bias in the training data. We start by deriving the estimation P̂[E|S] of P[E|S], which then allows us to derive the estimation P̂[E|Z,S], from which we derive Ê and Ŷ_Ê.

§.§ The Problem

We briefly recall the problem, already described in detail in the introduction: we have a data model represented by the causal graph in Figure <ref>, where S is the sensitive attribute representing a social group, E is the latent variable on which we would like to base our decision, and Z is an observed but biased version of E.
A decision Y_Z based on Z is usually unfair, hence we would like to estimate the true value of E for each observation and each group, and base our decision directly on that value. To this aim, we need to estimate the distribution P[E|Z,S], and the first step is to estimate the distribution of E for each group, P[E|S]. We accomplish this task by adapting the Expectation-Maximization method to our particular setting, as explained in the next section.

§.§ Deriving P̂[E|S]

We estimate the unknown parameter P[E|S] as the MLE of a sequence of samples (z̅,s̅) = {(z_1,s_1),…,(z_N,s_N)}, assuming that we know the effect of the bias, i.e., P[Z|E,S]. We denote by φ_s[z,z̅] the empirical probability of Z=z given S=s, i.e., the frequency of z in the samples with S=s. Algorithm <ref> to estimate P[E|S] starts with the uniform distribution, and then computes iteratively, at each step t, a new estimation P̂[E|S]^(t) from the previous one, getting closer and closer to the MLE. The derivation of Algorithm <ref> from the EM method is shown in the additional material. We remark that a similar adaptation of the EM algorithm, for a totally different problem, appears in the privacy literature <cit.> under the name of Iterative Bayesian Update. In that case, the variable to estimate was the original distribution of data obfuscated by a privacy-protection mechanism. The problem in that setting was somewhat simpler, because it involved only two variables (the original distribution and the empirical one on the observed obfuscated data) and there was no conditioning.

§.§ Deriving P̂[E|Z,S]

Given the data {(z_i,s_i) | i∈[1,…,N]}, the conditional distributions P[Z|E,S], and the estimation P̂[E|S], we derive the estimation P̂[E|Z,S] by applying the Bayes formula: P̂[E=e|Z=z,S=s] = P[Z=z|E=e,S=s] P̂[E=e|S=s] / P[Z=z|S=s].

§.§ Deriving Ŷ_Ê

We propose two ways to derive Ŷ_Ê for correcting the samples in the training data, depending on how much probability mass is concentrated on the mode of P̂[E|Z,S]. We denote by τ the threshold for the values of E that qualify for the positive decision.

Method 1. Given z and s, if P̂[E|Z=z,S=s] is unimodal and has a large probability mass (say, 50% or more) on its mode, then we can safely set Ê to be that mode. Namely, if max_e P̂[E=e|Z=z,S=s] ≥ 0.5, then we set ê = argmax_e P̂[E=e|Z=z,S=s], and we can then use ê directly to set Ŷ_Ê=1 or Ŷ_Ê=0 in those samples with Z=z and S=s, depending on whether ê ≥ τ or not, respectively: if Ê(Z=z,S=s) ≥ τ then Ŷ_Ê(Z=z,S=s) = 1, otherwise Ŷ_Ê(Z=z,S=s) = 0. Our experimental results show that this method gives a good accuracy.

Method 2. If P̂[E|Z=z,S=s] is dispersed over several values, so that no value is strongly predominant, then it is impossible to estimate individual values of E with high accuracy. However, we can still accurately estimate Y_E as follows. Let σ_0 = ∑_e<τ P̂[E=e|Z=z,S=s] and σ_1 = ∑_e≥τ P̂[E=e|Z=z,S=s]. If σ_0 < σ_1, then we set Ŷ_Ê=1; otherwise, Ŷ_Ê=0. Formally: if ∑_e≥τ P̂[E=e|Z=z,S=s] ≥ 0.5 then Ŷ_Ê(Z=z,S=s) = 1, otherwise Ŷ_Ê(Z=z,S=s) = 0.
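The three derivation steps above can be summarised in a short numerical sketch. The code below is only an illustration of the idea, not the implementation released with the paper; the function names, array layout, and stopping rule are assumptions made for this sketch. The EM update used here is the iterative Bayesian-update form derived in the supplementary material.

```python
# Illustrative sketch of the BaBE pipeline (assumed interfaces, not the authors' code).
# Distributions are numpy arrays over the discrete values of E and Z, for one group S=s.
import numpy as np

def estimate_prior(emp_z, bias, n_iter=1000, tol=1e-9):
    """EM (iterative Bayesian update) estimate of P[E | S=s].

    emp_z : empirical distribution of Z in the group, shape (|Z|,)
    bias  : known bias mechanism P[Z=z | E=e, S=s], shape (|E|, |Z|)
    """
    n_e = bias.shape[0]
    prior = np.full(n_e, 1.0 / n_e)              # start from the uniform distribution
    for _ in range(n_iter):
        pz = prior @ bias                        # current model of P[Z | S=s]
        post = bias * prior[:, None] / np.where(pz > 0, pz, 1.0)[None, :]
        new_prior = post @ emp_z                 # re-weight by the observed Z frequencies
        if np.abs(new_prior - prior).max() < tol:
            return new_prior
        prior = new_prior
    return prior

def posterior(prior, bias):
    """Bayes step: matrix of P[E=e | Z=z, S=s], shape (|E|, |Z|)."""
    pz = prior @ bias
    return bias * prior[:, None] / np.where(pz > 0, pz, 1.0)[None, :]

def decide(post_z, e_values, tau):
    """Decision for one observed z: Method 1 if the mode is heavy, otherwise Method 2."""
    if post_z.max() >= 0.5:                      # Method 1: estimate E by its mode
        return int(e_values[post_z.argmax()] >= tau)
    return int(post_z[e_values >= tau].sum() >= 0.5)   # Method 2: thresholded mass
```

For a sample with observed value z in group s, post_z is the corresponding column of posterior(prior_s, bias_s), where prior_s and bias_s are the estimated P̂[E|S=s] and the known P[Z|E,S=s] for that group.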
§ EXPERIMENTS

In this section, we test our method BaBE on scenarios corresponding to Examples <ref> and <ref>, using synthetic data sets and a real data set respectively, and we compare our results with those achieved by the following well-known pre-processing approaches that aim to satisfy statistical parity.

§.§ Other Algorithms for Comparison

The first approach we compare with is the disparate impact (DI) remover <cit.> [For the experiments we use the implementation by <cit.>.]. DI has a parameter λ, which represents the minimum allowed ratio between the probabilities of success (Ŷ=1) of the two groups (hence λ=1 corresponds to statistical parity). For the experiments, we use λ=0.8. The second algorithm we compare ours with is the naive Bayes (NB) approach <cit.> [We use the implementation that was kindly provided by the authors of <cit.>.]. NB also applies the EM method. However, in contrast to our work, NB assumes that E and S are independent, and uses EM to take decisions that optimize the trade-off between SPD and accuracy.

§.§ Data and Experiments

§.§.§ Synthetic data sets

We generate the synthetic data sets as follows. First, we generate a data set (multiset) of 20K elements {s_i}_i∈[1,20K] [We use the notation [a,b] to represent the integers from a to b.] representing values for the sensitive variable (group) S, where each s_i is sampled from the Bernoulli distribution ℬ(0.5). This means that the two groups are about even. Then, we set the domain of E to be equal to [0,99], and to each of the elements s_i in the data set we associate a value e_i for the variable E, sampled from the normal distribution 𝒩(mean0, sd) if s_i=0 and from 𝒩(mean1, sd) if s_i=1 [To keep the samples in the range of E, we re-sample the values that are lower than 0 or higher than 99. We also discretize them by rounding to the nearest integer.], where the mean mean1 is set to 60 and the standard deviation sd is set to 30. The value of mean0, on the other hand, varies through the experiments from 40 to 80. Varying mean0 allows us to test how our method behaves both when E is independent of S and when it is not. Finally, to each pair (s_i,e_i) we associate a value z_i for Z by applying a randomly sampled bias to e_i. More precisely, z_i = e_i + (bias × e_i), where bias is sampled from 𝒩(-0.2, 0.05) if s_i=0 and from 𝒩(0.2, 0.05) if s_i=1. The threshold for the decision is E = 60. Namely, Y=1 if E > 60 and Y=0 otherwise.
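The following numpy sketch mirrors the generation procedure just described; it is an illustration rather than the exact script used for the experiments, and the random seed and variable names are arbitrary choices made here.

```python
# Illustrative generation of one synthetic data set (not the experiment code).
import numpy as np

rng = np.random.default_rng(0)          # arbitrary seed, for reproducibility only
N, MEAN0, MEAN1, SD, TAU = 20_000, 40, 60, 30, 60

s = rng.binomial(1, 0.5, size=N)                         # group S ~ Bernoulli(0.5)
e = np.where(s == 0, rng.normal(MEAN0, SD, N), rng.normal(MEAN1, SD, N))
while ((e < 0) | (e > 99)).any():                        # re-sample values outside [0, 99]
    bad = (e < 0) | (e > 99)
    e[bad] = np.where(s[bad] == 0,
                      rng.normal(MEAN0, SD, bad.sum()),
                      rng.normal(MEAN1, SD, bad.sum()))
e = np.rint(e).astype(int)                               # discretize E
bias = np.where(s == 0, rng.normal(-0.2, 0.05, N), rng.normal(0.2, 0.05, N))
z = e + bias * e                                         # biased observation Z
y_e = (e > TAU).astype(int)                              # "true" decision based on E
y_z = (z > TAU).astype(int)                              # biased decision based on Z

# Statistical parity difference of the biased decision, as a quick sanity check
spd_z = y_z[s == 1].mean() - y_z[s == 0].mean()
print(f"SPD of the Z-based decision: {spd_z:.3f}")
```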
§.§.§ Application of BaBE

Once the data are generated, we use a random portion of them (50%) to derive P[Z|E,S] and P[E|S], which we consider as the "true" distributions. Then we take another portion of data (40%), randomly selected from the unused ones, we remove the E values from them, and we use them to compute the empirical distribution P[Z|S] and to produce, by applying our BaBE method, the estimates P̂[E|S] and P̂[E|Z,S]. We verify that these satisfy the conditions for Method 1 of Section <ref>, and we apply this method to set the values of Ê and Ŷ_Ê for each sample. We evaluate various metrics for the precision of the estimations and for fairness, and compare the performance of BaBE with the disparate impact remover (DI) and with naive Bayes (NB). The results are illustrated in Figures <ref> through <ref>. The boxplots are obtained by repeating the experiments ten times with the same parameters. Additional experiments, with different parameters, are in the supplementary material.

§.§.§ Real Healthcare Data Set

We use the privatized data [The data set synthesized by the authors of <cit.> to mimic the real data used in the study. The data set and the documentation can be found in the GitHub repository <https://gitlab.com/labsysmed/dissecting-bias>] published by <cit.>. In particular, we use only three variables from the data set, corresponding to Race, which is our S, Healthcare Spending, which is our Z, and the Total Number of Chronic Diseases, which is our E. We slightly modify the data as follows: (1) the total number of Chronic Diseases per patient is truncated at 9, and (2) Healthcare Spending is discretized into 10 bins based on the percentiles. We apply BaBE, and set Ŷ_Ê using Method 2 of Section <ref> (Method 1 is not applicable, as in this case the distribution of Ê is quite widespread). The results are shown in Figure <ref>.

§.§ Discussion

Our experiments with synthetic data show very good results in the precision of the estimation of P[E] (Figure <ref>) and P[E|Z,S] (cf. supplementary material), as well as in the distortion of Ê (Figure <ref>) and the accuracy of Ŷ_Ê (Figure <ref>). Also the results for fairness are very good, particularly for the notions for which BaBE is designed, i.e., CSPD_e (Figure <ref>), EOD (Figure <ref>), and CSPD (cf. supplementary material). Surprisingly, in this experiment, BaBE outperforms DI and NB also in SPD. This is odd, because DI and NB are designed for this metric (while BaBE is not). Actually, BaBE performs as expected, while DI and NB perform badly. We think that this is because these parameter settings clash with the constraints of DI and NB. In any case, in other experiments, DI and NB outperform BaBE in SPD, as expected (cf. supplementary material). BaBE gives the same good performance for CSPD, EO, and accuracy also when P[E|S] is different from that of the data on which P[Z|E,S] has been computed (cf. supplementary material), which shows that BaBE is compatible with the transfer of causal knowledge to populations with different distributions. On the contrary, CSPD, EO, and accuracy for algorithms like DI and NB highly depend on the distribution, as they always try to impose equality. On the real data, BaBE correctly estimates that the black population is more likely to have a higher number of chronic diseases. As for the decision of admittance to the Health Programme: if the positive decision were based on a threshold on the Healthcare Spending, e.g., that Z should be 1 or more, then the proportion of black patients (with respect to the total number of patients admitted to the programme) would be only 11% (Figure <ref>). We get a similar result if we set the threshold to be at least 2 (Figure <ref>). In contrast, if we base the decision on the estimated P̂[E|Z,S], the proportion is much higher: 60% when the threshold is at least one disease (Figure <ref>), and even 75% when it is two or more (Figure <ref>), which are close to the true proportions (ground truth, based on the true number of diseases per person). Notably, our first result is close to the estimation obtained by <cit.> using a counterfactual methodology, which was 59% [This figure is from the official repository <https://gitlab.com/labsysmed/dissecting-bias/-/blob/master/README.md>, and was corrected with respect to the original paper.]. Note that applying SP in this scenario would still result in severe discrimination against black patients, who on average have more health issues than white ones. It is important to mention that the performance of BaBE depends on the invertibility of P[Z|E,S=s] (seen as a stochastic matrix, a.k.a. the bias matrix), because invertibility guarantees the uniqueness of the MLE (cf. supplementary material). Even when the matrix is not invertible, however, we are able to obtain favorable results. Indeed, in all our experiments the bias matrices we produce from the synthetic data are not invertible, to mimic more realistic scenarios.
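For completeness, the following small check (an illustration; the function name is an assumption of this sketch) shows how one might verify whether an estimated bias matrix is invertible, and hence whether uniqueness of the MLE is guaranteed.

```python
# Illustrative check of the bias matrix P[Z | E, S=s] (rows indexed by E, columns by Z).
import numpy as np

def bias_matrix_is_invertible(bias):
    """True iff the matrix is square and has full rank, i.e., it is invertible."""
    return bias.shape[0] == bias.shape[1] and np.linalg.matrix_rank(bias) == bias.shape[0]
```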
Preliminary experiments show that the deterministic diagonal matrix produces maximum precision for the estimation of the distributions P[E|S], and maximum accuracy of the prediction Ŷ_Ê. We leave a more systematic study of how precision and accuracy depend on P[Z|E,S=s] as a topic for future work.

§ CONCLUSIONS AND FUTURE WORK

We have proposed a framework that uses knowledge of a biasing mechanism obtained from domain-specific studies to perform data pre-processing, aiming at achieving conditional statistical parity when the explanatory variable is latent. The BaBE algorithm uses the bias mechanism to estimate the probability distributions P[E|Z,S], where Z is the biased version of E and S is the sensitive attribute. It is to be noted that the algorithm is able to estimate P[E|Z,S] with high accuracy, even when the prior distributions P[E|S] are different from the ones on which the study of the bias was conducted. One challenging direction for future work is to explore how the precision of the estimation, the accuracy of the prediction, and the fairness level depend on the form of the matrices P̂[E|Z,S], and how the latter depend on the matrices representing the external knowledge (i.e., the bias mechanism) P̂[Z|E,S]. Moreover, we plan to collaborate with domain experts to define formal conditions for measuring the P̂[Z|E,S] matrices accurately. We trust that our method can serve as a tool to enhance interdisciplinary collaboration between domain experts and ML fairness practitioners.

§ ACKNOWLEDGEMENTS

The project is funded by the ERC grant Hypatia (<https://project.inria.fr/hypatia/>) under the European Union's Horizon 2020 research and innovation program, grant agreement No 835294. We are grateful to Toon Calders and Sicco Verwer for providing their code for the experiments.

§ SUPPLEMENTARY MATERIAL

§ DERIVATION OF BABE AS AN INSTANCE OF THE EM METHOD

Let E, Z and S be random variables on ℰ, 𝒵 and 𝒮, with generic elements e, z and s respectively. Let (z̅,s̅) = {(z_i,s_i) | i = 1,…,N} be a sequence of samples from the joint distribution P[Z,S], let z̅_s ≜ { z_i : i ∈{1,...,N} ∧ s_i = s } be the subsequence of z̅ of elements paired with s in the samples, and let M be |z̅_s|. Then, the empirical probability of Z=z given S=s (i.e., the frequency of z in the samples with S=s) is defined as φ_s[z, z̅_s] ≜ | { z_i ∈ z̅_s : z_i = z } | / M. Now, given (z̅,s̅), s ∈𝒮, φ_s[z,z̅_s] and the conditional distribution P[Z|E,S], we want to estimate the (unknown) P[E|S] by applying the Expectation-Maximization (EM) method, i.e., by finding the probability distribution on ℰ that maximizes the probability of observing z̅_s given s (and that therefore is the best explanation of what we have observed). More precisely, we want to prove that our algorithm yields a Maximum Likelihood Estimation (MLE) P̂[E|S] that approximates P[E|S]. To this end, let Θ denote the set of all distributions on ℰ conditioned on S=s, and let θ range over it. The log-likelihood function for z̅_s is L_z̅_s : Θ → ℝ such that L_z̅_s(θ) ≜ log P[Z̅_s=z̅_s | θ], where Z̅_s denotes a sequence of M random samples drawn from 𝒵 when S=s. Given z̅_s, an MLE of the unknown P[E|S] is then defined as argmax_θ L_z̅_s(θ), i.e., as the θ that maximizes L_z̅_s(θ) (and therefore P[Z̅_s=z̅_s | θ], since log is monotone). We now show how to adapt the EM framework to the above setting. We start by defining the function Q(θ,θ') ≜ 𝔼[log θ̅ | Z̅_s=z̅_s, S=s, θ'], where θ̅ denotes the probability distribution on sequences e̅ = e_1, e_2, …, e_M of i.i.d. events, all with probability distribution θ.
The above expectation is taken on all ℰ and conditioned on Z̅_s=z̅_s, S=s, and assuming θ' as a prior approximation of P[E|S]. The function Q has the nice property that L_z̅_s(θ)- L_z̅_s(θ')≥ Q(θ,θ')- Q(θ',θ'). Hence, in order to improve the approximation of the MLE, i.e., to find an estimation θ that improves the estimation θ', it is sufficient to compute Q(θ,θ') and find the θ that maximizes it. Q(θ,θ')=∑_i=1^M∑_e ∈ℰP[Z_s=z_i|E=e,S=s] θ'[e|s]/∑_e' ∈ℰP[Z_s=z_i|E=e',S=s] θ'[e'|s]logθ[e|s]. Given that the E_is are i.i.d., by definition and linearity of conditional expectation, we have that: [logθ̅ | Z̅_s=z̅_s, S=s, θ'] = .[log∏_i=1^Mθ[e_i|s] | Z̅_s=z̅_s, S=s, θ' ] = .[∑_i=1^Mlogθ[e_i|s] | Z̅_s=z̅_s, S=s, θ' ] = ∑_i=1^M [logθ[e_i|s] | Z̅_s=z̅_s, S=s, θ'] = ∑_i=1^M∑_e ∈ℰ P[E=e|Z_s=z_i,S=s] logθ[e|s] where P[E|Z_s,S] is a probability based on the estimation θ' of the unknown P[E|S]. By taking the marginal distribution, we have that P[Z_s=z_i|S=s] = ∑_e' ∈ℰ P[Z_s=z_i,E=e'|S=s] = ∑_e' ∈ℰ P[Z_s=z_i|E=e',S=s] θ'[e'|s] By the conditional Bayes theorem and (<ref>), we have that P[E=e|Z_s=z_i,S=s] = P[Z_s=z_i|E=e,S=s] θ'[e|s]/P[Z_s=z_i|S=s] = P[Z_s=z_i|E=e,S=s] θ'[e|s]/∑_e' ∈ℰ P[Z_s=z_i|E=e',S=s] θ'[e'|s] By plugging the latter equality into (<ref>), we conclude the proof. The next Lemma tells us that P̂[E|S]^(t+1) (as defined in Algorithm 1) is the distribution that maximizes Q( · , P̂[E|S]^(t)). This fact will allow us to conclude that the algorithm approximates the MLE _θ L_z̅_s(θ). The θ that maximizes Q( · , θ') is such that, for every e ∈ℰ: θ[e|s] = ∑_z ∈𝒵φ_s[z, z̅_s]P[Z_s=z|E=e,S=s] θ'[e|s]/∑_e'∈ℰP[Z_s=z|E=e',S=s] θ'[e'|s]. By the method of Lagrangian multipliers, we can find the θ that maximizes Q(θ, θ') by adding to the latter the term λ(∑_e ∈ℰθ[e|s]-1), for some λ, and study the function F(θ, θ') ≜ Q(θ, θ') + λ(∑_e∈ℰθ[e|s]-1) that has the same stationary points as Q(θ, θ') since ∑_e ∈ℰθ[e|s] = 1, being θ a probability distribution on ℰ given S=s. To find the stationary points of F, we impose that all its partial derivatives, including the one w.r.t. λ, are equal to 0. For the latter one, we require that ∂ F/∂λ = ∑_e∈ℰθ[e|s]-1=0 and this trivially holds. For the former ones, by relying on Lemma <ref>, we impose that, for every e ∈ℰ, ∂ F/∂θ[e|s] = 1/θ[e|s]∑_i=1^MP[Z_s=z_i|E=e,S=s] θ'[e|s]/∑_e'∈ℰP[Z_s=z_i|E=e',S=s] θ'[e'|s]+ λ = 0 By multiplying the last equality by θ[e|s], we get: λθ[e|s] = - ∑_i=1^MP[Z_s=z_i|E=e,S=s] θ'[e|s]/∑_e'∈ℰP[Z_s=z_i|E=e',S=s] θ'[e'|s] By summing both sides of (<ref>) on all ℰ, we obtain: λ∑_e ∈ℰθ[e|s] = - ∑_e ∈ℰ∑_i=1^MP[Z_s=z_i|E=e,S=s] θ'[e|s]/∑_e' ∈ℰP[Z_s=z_i|E=e',S=s] θ'[e'|s] = - ∑_e ∈ℰ∑_z ∈𝒵φ_s[z, z̅_s] M P[Z_s=z|E=e,S=s] θ'[e|s]/∑_e' ∈ℰP[Z_s=z|E=e',S=s] θ'[e'|s] = -M ∑_z ∈𝒵φ_s[z, z̅_s] ∑_e ∈ℰP[Z_s=z|E=e,S=s] θ'[e|s]/∑_e' ∈ℰP[Z_s=z|E=e',S=s] θ'[e'|s] = -M ∑_z ∈𝒵φ_s[z, z̅_s] = -M where the last step holds because of (<ref>), and (<ref>) holds because, again by (<ref>), we have that ∑_i=1^M f(z_i) = ∑_z ∈𝒵φ_s[z, z̅_s] M f(z) for any function f. Hence, since θ is a probability distribution on ℰ, we obtain that (<ref>) is satisfied by taking λ = -M. 
Therefore, by isolating θ[e|s] from (<ref>) and by using (<ref>), we can conclude that, for every e ∈ℰ, we have that θ[e|s] = - 1/λ∑_i=1^MP[Z_s=z_i|E=e,S=s] θ'[e|s]/∑_e'∈ℰP[Z_s=z_i|E=e',S=s] θ'[e'|s] = 1/M∑_i=1^MP[Z_s=z_i|E=e,S=s] θ'[e|s]/∑_e'∈ℰP[Z_s=z_i|E=e',S=s] θ'[e'|s] = ∑_z ∈𝒵φ_s[z, z̅_s] P[Z_s=z|E=e,S=s] θ'[e|s]/∑_e'∈ℰP[Z_s=z|E=e',S=s] θ'[e'|s] Now, for the given s∈𝒮, we define the sequence { P̂[E|S=s]^(t) }_t≥ 0 as follows: P̂[E|S=s]^(0) P̂[E|S=s]^(t+1) _θ Q(θ, P̂[E|S=s]^(t)) The next theorem states the key property of our algorithm, i.e. that { P̂[E|S=s]^(t) }_t≥ 0 tends to the MLE _θ L_z̅_s(θ). The proof of the theorem follows from the fact that Q(θ , θ') has continuous derivatives in both its arguments, and from Theorem 4.3 in <cit.> (which is a reformulation of a result due to Wu <cit.>). t →∞limP̂[E|S=s]^(t) = θ L_z̅_s( θ ). Furthermore, if P[Z|E,S], seen as a stochastic matrix, is invertible, then the MLE θ L_z̅_s( θ ) is unique. The proof follows from Theorem 4 in <cit.>. § ADDITIONAL EXPERIMENTS ON SYNTHETIC DATA §.§ Synthetic data sets 1 r0.3 < g r a p h i c s > The distributions P[E|S=1] (orange) and P[Z|S=1] (magenta) This first group of data sets are the one used for the experiments shown in the body of the paper. We recall that the sequence of values s_i for the binary sensitive variable (group) S are generated by sampling from the Bernoulli distribution ℬ(0.5). The domain of E is set to be equal to [0,99], and to each of the elements s_i in the data set we associate a value e_i for the variable E, sampled from the normal distribution 𝒩(𝑚𝑒𝑎𝑛1, sd) if s_i=1 and from 𝒩(𝑚𝑒𝑎𝑛0, sd) if s_i=0. The mean 𝑚𝑒𝑎𝑛1 is set to be 60, while the value of 𝑚𝑒𝑎𝑛0 varies through the experiments from 40 to 80. The standard deviation sd is set to be 30. We keep the samples in the range of E by re-sampling the values that are lower than 0 or higher than 99. We also discretize them by rounding to the nearest integer. Finally, to each pair (s_i,e_i) we associate a value z_i for Z by applying a bias to e_i with a certain probability. More precisely, z_i = e_i + (𝑏𝑖𝑎𝑠× e_i), where 𝑏𝑖𝑎𝑠 is sampled from 𝒩(-0.2, 0.05) (negative bias) if s_i=0 and from 𝒩(0.2, 0.05) (positive bias) if s_i=1. The distributions of E and Z in the data set, for each group S=1 and S=0, are shown in Figures <ref> and <ref> respectively. We now apply our BaBE method to estimate the distributions P[E|S=1] and P[E|S=0]. The corresponding estimates P̂[E|S=1] and P̂[E|S=0] are shown in Figures <ref> and <ref>, respectively. As we can see, all the estimates are very close to the original distributions. We now apply the DI method to modify the values of E in the data sets, and from the resulting data sets we compute (by counting the frequencies) the distributions of the modified E for each group. The corresponding distributions are shown in Figures <ref> and <ref>, respectively. Note that the new distributions are not very close to the original ones, but, on the other hand, estimating the true E is not the goal of DI. Rather, DI aims at bounding the ratio between the E estimated from Z for each group, thus reducing the statistical parity difference. In this particular case, DI achieves the goal by changing the distribution of E especially for group 1. For this reason, even though there is only one data set for the group 1, we get 5 different distributions of the modified E, one for each value of 𝑚𝑒𝑎𝑛0. The difference between the original and the estimated distributions are measured using the Wasserstein distance. 
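As an aside, with the distributions represented as probability vectors over the domain [0,99] of E, this distance can be computed directly; the snippet below is only an illustration (the two example distributions are made up here) and uses SciPy's one-dimensional Wasserstein distance, which matches the definition given in the preliminaries for the absolute-value metric.

```python
# Illustrative computation of the Wasserstein distance between a "true" and an
# estimated distribution of E (both vectors are dummy examples made up for this sketch).
import numpy as np
from scipy.stats import wasserstein_distance

values = np.arange(100)                                   # the domain of E, i.e. [0, 99]
p_true = np.exp(-0.5 * ((values - 60) / 30) ** 2); p_true /= p_true.sum()
p_est  = np.exp(-0.5 * ((values - 58) / 31) ** 2); p_est  /= p_est.sum()
print(wasserstein_distance(values, values, u_weights=p_true, v_weights=p_est))
```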
The results are shown in the following figures, where the boxplots are obtained by repeating the experiments 10 times (with different sampling from the same original distributions). We now compute, by means of BaBE, the estimated distributions P̂[E|Z,S]. We verify that these satisfy the conditions for Method 1 of Section <ref>, and we apply this method to set the values of Ê and Ŷ_Ê for each sample. Based on this, we compute various metrics for precision and fairness, and compare them with the results obtained with the methods DI and NB. We also compare them with the prediction based on Z, namely Ŷ_Z. We recall that the threshold for the decision is E = 60. Namely, Y_E=1 if E > 60 and Y_E=0 otherwise. The threshold is the same for Z, i.e., Y_Z=1 if Z > 60 and Y_Z=0 otherwise.

§.§ Synthetic data sets 2

In this group of experiments, we show that BaBE is compatible with the transfer of causal knowledge to populations with different distributions. To this purpose, we estimate the distributions P[Z|E,S] from synthetic data generated as in the first group of experiments, with mean0=60, and we apply BaBE to different populations with mean0 varying from 40 to 80. The percentage of S in these populations also changes: we have set 60% of S=1 and 40% of S=0.

§ ADDITIONAL STATISTICS AND EXPERIMENTS ON THE REAL DATA
http://arxiv.org/abs/2307.01645v1
20230704105752
In-Domain Self-Supervised Learning Can Lead to Improvements in Remote Sensing Image Classification
[ "Ivica Dimitrovski", "Ivan Kitanovski", "Nikola Simidjievski", "Dragi Kocev" ]
cs.CV
[ "cs.CV" ]
In-Domain Self-Supervised Learning Can Lead to Improvements in Remote Sensing Image Classification Ivica Dimitrovski ^1,2, Ivan Kitanovski ^1,2, Nikola Simidjievski ^1,3,4, Dragi Kocev ^1,3 ^1Bias Variance Labs, Slovenia ^2Faculty of Computer Science and Engineering, University 'Ss. Cyril and Methodius', N. Macedonia ^3Jozef Stefan Institute, Slovenia ^4University of Cambridge, United Kingdom August 1, 2023 ====================================================================================================================================================================================================================================================================================================================================================================================== Self-supervised learning (SSL) has emerged as a promising approach for remote sensing image classification due to its ability to leverage large amounts of unlabeled data. In contrast to traditional supervised learning, SSL aims to learn representations of data without the need for explicit labels. This is achieved by formulating auxiliary tasks that can be used to create pseudo-labels for the unlabeled data and learn pre-trained models. The pre-trained models can then be fine-tuned on downstream tasks such as remote sensing image scene classification. The paper analyzes the effectiveness of SSL pre-training using Million AID - a large unlabeled remote sensing dataset on various remote sensing image scene classification datasets as downstream tasks. More specifically, we evaluate the effectiveness of SSL pre-training using the iBOT framework coupled with Vision transformers (ViT) in contrast to supervised pre-training of ViT using the ImageNet dataset. The comprehensive experimental work across 14 datasets with diverse properties reveals that in-domain SSL leads to improved predictive performance of models compared to the supervised counterparts. Keywords: Remote Sensing, Self-Supervised Learning, Earth Observation, Deep Learning § INTRODUCTION The growth in remote sensing data availability is only matched by the growth and developments in the area of artificial intelligence (AI), and in particular in the field of computer vision. More specifically, recent trends in deep learning have led to a new era of image analysis and raised the predictive performance bar in many application domains, including remote sensing and Earth Observation (EO) <cit.>. Deep models can leverage these large amounts of data, in turn, achieve state-of-the-art performance in various EO tasks (incl. land use/land cover classification, crop type prediction) using large quantities of labeled (ground truth) remote sensing data (typically hundreds of thousands of labeled images). However, for many highly relevant tasks (e.g., archaeological sites identification, pasture grazing, fertilization in agriculture), there is lack of data which is (sufficiently) large, publicly available, and more importantly - labeled. This is a limiting factor in the further uptake and use of AI for remote sensed data. On the one hand, this is expected and understandable, considering that large-scale dataset annotation is an expensive, tedious, time-consuming, and largely manual process. On the other hand, such scenarios call for methods that can learn and represent the (visual) information given in the images without needing labeled examples. Such methods are developed using the SSL paradigm <cit.>. 
SSL provides a novel approach to the challenge of data labeling by leveraging the power of unlabeled data without the added overhead of manual annotation. In the SSL paradigm, a model is learned in two stages: self-supervised training is first performed on a large unlabeled dataset, and then the trained model is transferred to specific downstream tasks with limited amount of labeled data. In the context of remote sensing, the downstream tasks, also known as target tasks can be scene classification, semantic segmentation, representation learning and object detection <cit.>. The main idea behind SSL is to create auxiliary pretext task for the model from the input data itself. While solving the auxiliary task, the model learns useful representations and relevant features of the input data and its underlying structure that can be used for downstream tasks. There are various types of pretext tasks such as augmentation of the input data and then training a network to learn the applied augmentation, for example rotation, color removal, repositioning/removing patches in the images. In this way, the learned representations can help to bridge the gap in performance between balanced and imbalanced datasets <cit.>. In this work, we investigate the following research hypothesis: Whether in-domain self-supervised learning can lead to improvements in the performance of remote sensing image scene classification models. The downstream task is thus image scene classification. In a typical scenario, working with large scale EO images, this task addresses classifying smaller image (patches) extracted from the much larger remote sensing image. The extracted images can be then annotated with a single or multiple annotations based on the content using explicit semantic classes (eg. forests, residential area, rivers etc.). Given an image as an input, the model needs to output a single or multiple semantic labels, denoting land-use and/or land-cover (LULC) classes present in that image. The research hypothesis is tested through a series of experiments across 14 remote sensing image datasets from the downstream task. As an architecture for SSL pre-training we use the iBOT framework <cit.> with Vision Transformer (ViT) <cit.> as a backbone model. Such architecture has not been explored in the context of SSL from remote sensing imagery. The SSL paradigm has been previously exploited in the context of remote sensing image scene classification. For example in <cit.>, a multi-augmentation contrastive learning method (SeCo) is proposed to take into account the seasonal changes in the satellite data. In <cit.>, a large-scale unlabeled dataset for SSL in Earth observation (SSL4EO-S12) is presented and evaluated using several SSL algorithms, such as MoCo <cit.>, DINO <cit.> and MAE <cit.>. In <cit.>, a rotated varied-size window attention method is explored to advance the performance of plain ViT. Using MAE as a SSL pre-training method, the proposed approach is evaluated on few downstream tasks for remote sensing image scene classification. In <cit.>, a large-scale multitask dataset (Satlas) for benchmarking and improving remote sensing image understanding models is presented and analyzed using MoCo and SeCo as SSL methods. Whilst it is evident that SSL for remote sensing images is getting increased attention by the researchers and practitioners, there are several limitations preventing even more increased uptake by the community. 
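To make the notion of a pretext task discussed above concrete, the following PyTorch-style sketch shows one of the simplest examples mentioned earlier, rotation prediction: each image is rotated by a random multiple of 90 degrees and the network is trained to recognise the applied rotation, so the rotation index acts as a free pseudo-label. This is only an illustrative sketch (the backbone, transforms, and training-loop details are assumptions made here), not the iBOT-based pre-training actually used in this study.

```python
# Illustrative rotation-prediction pretext task (not the iBOT pre-training used here).
import torch
import torch.nn as nn

def rotate_batch(images):
    """Rotate each image by a random multiple of 90 degrees; return pseudo-labels."""
    labels = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

class RotationPretextModel(nn.Module):
    """Any backbone producing a feature vector, followed by a 4-way rotation head."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, 4)

    def forward(self, x):
        return self.head(self.backbone(x))

def pretext_step(model, optimizer, images):
    criterion = nn.CrossEntropyLoss()
    rotated, labels = rotate_batch(images)      # pseudo-labels come for free
    loss = criterion(model(rotated), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```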
Our study aims to address the limitations in the existing body of work by making the following contributions: * A comprehensive experimental evaluation across a large number of diverse datasets – we use 14 remote sensing image scene classification datasets with different numbers of images (ranging from ∼2K to ∼500K), number of labels per image (from 2 to 60), different spatial resolution, size and format, and varying label distribution (perfectly balanced datasets as well as highly imbalanced datasets); * Clearly defined protocols for standardization of the experimental work and reproduction of the results; and * Standardization of the splits of the data into train, validation and test parts, thus making the comparison of the results from different studies easier. In the remainder, we first explain in more detail the iBOT self-supervised method and introduce remote sensing image scene classification as a downstream task to evaluate the performance of the pre-trained models. Next, we explain the experimental design of the study, including the implementation details and settings. Lastly, we present and discuss the experimental results and provide guidelines for further work. § METHODS AND MATERIALS The computer vision research community has developed a variety of SSL methods based on different deep learning architectures, including autoencoders, generative adversarial networks, contrastive methods based on negative sampling, knowledge distillation and redundancy reduction. A recent overview of SSL methods in remote sensing <cit.> reports that the iBOT method <cit.> achieves the highest top-1 accuracy on the ImageNet dataset – this is why we selected this method to test our hypothesis. §.§ Self-supervised pre-training using iBOT The iBOT (image BERT pre-training with Online Tokenizer) model architecture <cit.> has exhibited exceptional performance in SSL tasks <cit.>, establishing it as a state-of-the-art solution. It is inspired by Masked Language Modeling (MLM) <cit.>, which involves randomly masking and reconstructing a set of input tokens and has become the de facto standard in language-related tasks. At the core of MLM lies the lingual tokenizer, responsible for segmenting sentences into semantically meaningful tokens. Building upon the success of MLM, the iBOT model extends its effectiveness to vision tasks through Masked Image Modeling (MIM), offering improved training pipelines for Vision Transformers. However, a significant challenge arises in visual tasks due to the absence of the statistical analysis of word frequency available in language tasks. In this regard, MIM aims to design a visual tokenizer capable of accommodating the continuous nature of images. MIM is based on the concept of knowledge distillation: it extracts knowledge from the tokenizer and engages in self-distillation with a twin teacher acting as an online tokenizer. In this setup, the target network receives the masked image as input, while the online tokenizer processes the original (unchanged) image. The objective is for the target network to learn to correctly recover each masked patch token, aligning it with the corresponding output from the tokenizer. §.§ Unlabeled dataset for the self-supervised pretext task At the core of the definition of the self-supervised pretext task lies the selection of the unlabeled data used to learn the image representations.
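Before describing that dataset, the masked self-distillation objective outlined in the previous subsection can be illustrated with a short sketch. The snippet below is a simplified PyTorch-style illustration and not the actual iBOT implementation: it covers only the masked patch-token branch, and the backbone, masking ratio, temperatures and EMA momentum are placeholder assumptions made purely for illustration.

```python
import torch

def mim_distillation_loss(student_patch_logits, teacher_patch_logits, mask,
                          t_student=0.1, t_teacher=0.04):
    """Masked-image-modeling self-distillation loss in the spirit of iBOT (sketch).

    student_patch_logits: (B, N, K) patch-token outputs for the *masked* view
    teacher_patch_logits: (B, N, K) outputs of the online tokenizer (full view)
    mask:                 (B, N) boolean, True where a patch was masked
    """
    teacher_probs = torch.softmax(teacher_patch_logits / t_teacher, dim=-1).detach()
    student_logp = torch.log_softmax(student_patch_logits / t_student, dim=-1)
    per_patch = -(teacher_probs * student_logp).sum(dim=-1)      # (B, N) cross-entropy
    m = mask.float()
    return (per_patch * m).sum() / m.sum().clamp(min=1.0)        # masked tokens only

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    """Online tokenizer = exponential moving average of the student weights."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(momentum).add_(p_s.data, alpha=1.0 - momentum)

# toy usage with random tensors standing in for ViT patch-token outputs
B, N, K = 2, 196, 8192                          # batch, patches, "visual vocabulary" size
mask = torch.rand(B, N) < 0.4                   # ~40% of patches masked (assumption)
student_out = torch.randn(B, N, K, requires_grad=True)
teacher_out = torch.randn(B, N, K)
loss = mim_distillation_loss(student_out, teacher_out, mask)
loss.backward()

teacher_net, student_net = torch.nn.Linear(4, 4), torch.nn.Linear(4, 4)
ema_update(teacher_net, student_net)            # teacher tracks the student over training
```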
In our study, we utilize the Million Aerial Image Dataset (MillionAID) <cit.> dataset with remote sensing scene images, which adheres to the highly desirable principles of benchmark datasets – diversity, richness and scalability. Essentially, 95 categories organized into a hierarchical scene category network were used as search queries sent to Google Earth to obtain the images. The resulting dataset comprises 1,000,848 non-overlapping remotes sensing scenes in RGB format. RGB is the typical format used as input to the deep learning models in the context of remote sensing image scene classification. The size of images in the dataset varies from 110×110 to 31,672×31,672 pixels due to the fact that they were captured using different types of sensors. For learning of the pre-trained model on the pretext task, we use the MillionAID images without the labels/queries used to construct it. The learned model can then be applied to different downstream tasks. §.§ Downstream tasks The performance evaluation of SSL methods involves assessing their effectiveness on downstream or target tasks. The pre-trained model obtained through the self-supervised setup from the pretext task is transferred to these specific tasks. Transfer learning facilitates the utilization of learned representations from models pre-trained on significantly larger image datasets, enabling downstream models to benefit from this knowledge. As a result, these models often exhibit improved generalization power while requiring fewer training data and iterations. This advantage is particularly valuable for tasks that originate from smaller datasets. Based on the number of semantic labels assigned to the images in the datasets, image scene classification tasks can be further divided into multi-class (MCC) and multi-label (MLC) classification. In multi-class classification, each image is associated with a single class or label from a predefined set. The objective is to predict one and only one class for each image in the dataset. On the other hand, in multi-label classification, images can be associated with multiple labels from a predefined set based on the available information. The goal is to predict the complete set of labels for each image in the dataset. We use 9 MCC datasets and 5 MLC datasets to benchmark and evaluate the performance of the self-supervised models in different contexts. The MCC datasets include Eurosat, UC Merced, Aerial Image Dataset (AID), RSSCN7, Siri-Whu, Resisc45, CLRS, RSD46-WHU, and Optimal31. The MLC datasets include AID (mlc), UC Merced (mlc), MLRSNet, Planet UAS, and BigEarthNet (the version with 19 labels). More detailed description of these datasets is given in <cit.>. § EXPERIMENTAL SETUP The research hypothesis posed in Section <ref> was tested by comparing the performance of two types of pre-trained models: i) model learned by self-supervised iBOT using the unlabeled MillionAID data and ii) model learned by supervised Vision transformer (ViT) <cit.> using the labeled ImageNET-1k dataset (version V1) <cit.>. Figure <ref> illustrates the experimental pipelines. After the training of both models (self-supervised iBOT and supervised ViT), they are transferred via fine-tuning to the specific downstream tasks (labeled remote sensing image scene classification datasets). We next present the transfer learning strategies evaluated in this work as well as the specific implementation details and settings. 
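Before presenting the transfer-learning strategies, the distinction drawn above between multi-class (MCC) and multi-label (MLC) scene classification can be made concrete. The sketch below, with placeholder class counts and tensor shapes, shows that the two task types differ only in the target encoding and the loss attached to the classifier head; it is an illustration, not the exact training code used in the study.

```python
import torch
import torch.nn as nn

num_classes = 10                       # placeholder class count
features = torch.randn(4, 768)         # e.g. ViT-B/16 [CLS] embeddings for a batch of 4
head = nn.Linear(768, num_classes)
logits = head(features)

# Multi-class (MCC): exactly one label per image -> integer targets + softmax cross-entropy
mcc_targets = torch.tensor([3, 0, 7, 7])
mcc_loss = nn.CrossEntropyLoss()(logits, mcc_targets)

# Multi-label (MLC): a set of labels per image -> multi-hot targets + per-class sigmoid BCE
mlc_targets = torch.zeros(4, num_classes)
mlc_targets[0, [1, 4]] = 1.0           # first image carries labels 1 and 4, etc.
mlc_targets[1, 2] = 1.0
mlc_loss = nn.BCEWithLogitsLoss()(logits, mlc_targets)

# Inference: argmax for MCC, per-class threshold on sigmoid scores for MLC
mcc_pred = logits.argmax(dim=1)
mlc_pred = (torch.sigmoid(logits) > 0.5).int()
```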
§.§ Transfer learning strategies With transfer learning the learned models are applied on the downstream tasks. Two strategies for transfer learning are being used: (1) Linear probing (or linear classification) by updating the model weights only for the last, classifier layer or (2) Fine-tuning the model weights of all layers in the network. The former approach, retains the values of all but the last layer's weights of the model from the pre-training, keeping them 'frozen' during the training on the downstream task. The latter, on the other hand, allows the weights to change throughout the entire network during fine-tuning on the downstream task. In juxtaposition with these strategies, we evaluate ViT models trained from scratch (i.e., initialized with random weights) on the downstream tasks. §.§ Implementation details and settings For the downstream tasks, we use the train, validation, and test splits of the image datasets provided within the AiTLAS: Benchmark Arena – an open-source EO benchmark suite available at <https://github.com/biasvariancelabs/aitlas-arena>. We use the train splits for model training, while the validation splits are employed for parameter selection and search. We apply early stopping on the validation split to prevent overfitting, saving the best checkpoint or model based on the lowest validation loss. The best model is then evaluated on the test split to estimate its predictive performance. =-1 To enhance the model's robustness, we incorporate data augmentation techniques during training. The process involves resizing all images to 256×256 pixels, followed by random crops of size 224×224 pixels. Additionally, random horizontal and/or vertical flips are applied. During the evaluation of predictive performance, when applying the model to test data, the images are first resized to 256×256 pixels and then subjected to a central crop of size 224×224 pixels. We believe that these augmentation techniques contribute to better generalization of our models for a given dataset. Regarding the parametrization of the deep architectures, we use the ViT-Base model with an input size of 224×224 and a patch resolution of 16×16 pixels (ViT-B/16). For our experiments, we use the pre-trained model ViT-B/16 trained in a supervised manner on the ImageNet-1K dataset (version V1) from the repository <cit.>. We pre-train iBOT with ViT-B/16 as the backbone for 400 epochs using the MillionAID dataset to obtain the self-supervised model. During training, we follow the parameter settings suggested in the literature (cf. <cit.>). To begin with, we evaluate the performance of the models learned using different values of learning rate: 0.01-0.00001. Next, we use ReduceLROnPlateau as a learning scheduler, reducing the learning rate when the loss has stopped improving. Models often benefit from reducing the learning rate by a factor once learning stagnates for a certain number of epochs (denoted as `patience'). In our experiments, we track the value of the validation loss, with patience set to 5 and reduction factor set to 0.1 (the new learning rate will thus be lr * factor). The maximum number of epochs is set to 100. We also apply early stop criteria if no improvements in the validation loss are observed over 10 epochs. We use fixed values for some hyperparameters, such as batch size, which was set to 128. We employ RAdam optimizer without weight decay <cit.> for optimization. 
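The two transfer-learning strategies and the optimization settings just described can be summarized in a short sketch. The numeric values mirror the text, but the torchvision attribute names and the overall structure are an assumed reconstruction for illustration, not the study's actual configuration files.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

num_classes = 45                                            # placeholder for a downstream dataset
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)    # supervised ImageNet-1K (V1) weights
model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)   # new classifier layer

linear_probing = False      # True: train only the classifier head; False: fine-tune all layers
if linear_probing:
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith("heads")          # freeze everything but the head

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.RAdam(params, lr=1e-4, weight_decay=0.0)       # RAdam, no weight decay
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5)                     # settings from the text

# inside the training loop, after computing the epoch's validation loss:
val_loss = 1.234                                            # placeholder value
scheduler.step(val_loss)                                    # reduce LR when the loss plateaus
```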
RAdam is a variant of the standard Adam, which employs a mechanism that rectifies the variance from the adaptive learning rate. This, in turn, allows for an automated warm-up tailored to the particular dataset at hand. Regarding the evaluation measures for predictive performance, for the multi-class classification tasks we report the top-1 accuracy measure, referred to as 'Accuracy', and for multi-label classification tasks we report the macro averaged mean average precision (mAP). All models were trained on NVIDIA A100-PCIe GPUs with 40 GB of memory running CUDA version 11.5. We configure and run the experiments using the AiTLAS toolbox <cit.> (available at <https://aitlas.bvlabs.ai>). All configuration files for each experiment and the trained models are available in our repository. § RESULTS Table <ref> shows the predictive performance of the evaluated methods across both MCC and MLC datasets. More precisely, it lists three groups of results: i) training from scratch with random initialization of the model weights, ii) transfer learning using linear probing of the pre-trained models (supervised on the ImageNet dataset, self-supervised on the MillionAID dataset), and iii) transfer learning using fine-tuning of the pre-trained models (supervised on the ImageNet dataset, self-supervised on the MillionAID dataset). A close inspection of the results reveals the following findings. First of all, the best performing method across all MCC and MLC datasets is the iBOT model obtained in an SSL manner (performances given in boldface typesetting). Second, using pre-trained models clearly helps to improve the predictive performance with both transfer learning strategies. On average, the improvement in performance of pre-trained models over learning from scratch is the largest using fine-tuned SSL models (∼16.15% for MCC datasets and ∼9.54% for MLC datasets). The improvement is larger with the fine tuning strategy than with the linear probing strategy. We report the following notable improvements in predictive performance: for the MCC datasets, improvements over 15% are reported for the Optimal31, UC Merced, AID, CLRS, and RESISC45 datasets, and for the MLC datasets, improvements over 9% are reported for the AID (mlc), UC Merced (mlc), and MLRSnet datasets. Note that training from scratch gives better results than transfer learning with linear probing only for the BigEarthNet dataset, which is not the case for the fine tuning strategy. This is somewhat expected because of the large number of images available (500K in total), which provides sufficient information to train a model from scratch with very good performance. Third, the SSL models consistently outperform the supervised models across all datasets and both transfer learning strategies. These improvements are smaller than those over the models learned from scratch. The largest differences are observed when using linear probing as a transfer learning strategy: the average improvement of SSL models over the supervised models across the MCC datasets is ∼3.43% and across the MLC datasets is ∼2.53%. The largest differences (larger than 3%) here are observed for the MCC datasets Optimal31, SIRI-WHU, CLRS and RESISC45, and for the MLC datasets AID (mlc) and MLRSnet. The differences in performance between SSL and supervised models are smaller when fine tuning is used as a transfer learning strategy (on average less than 1%).
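For reference, the two evaluation measures used above – top-1 accuracy for MCC and macro-averaged mAP for MLC – can be computed with a few lines of NumPy. The average-precision routine below is a standard precision-at-each-positive formulation and only a sketch; the reported numbers were produced with the AiTLAS toolbox, whose implementation may differ in details such as tie handling.

```python
import numpy as np

def top1_accuracy(scores, labels):
    """scores: (n, C) class scores, labels: (n,) integer ground truth."""
    return float((scores.argmax(axis=1) == labels).mean())

def average_precision(scores, targets):
    """AP for one label. scores: (n,) confidences, targets: (n,) in {0, 1}."""
    order = np.argsort(-scores)
    t = targets[order]
    if t.sum() == 0:
        return 0.0
    precision_at_hits = np.cumsum(t) / (np.arange(len(t)) + 1)
    return float((precision_at_hits * t).sum() / t.sum())

def macro_map(scores, targets):
    """Macro-averaged mAP over labels. scores, targets: (n, C), targets multi-hot."""
    return float(np.mean([average_precision(scores[:, c], targets[:, c])
                          for c in range(scores.shape[1])]))

# toy check with random predictions
rng = np.random.default_rng(0)
s = rng.random((100, 5))
y_mcc = rng.integers(0, 5, size=100)
y_mlc = (rng.random((100, 5)) > 0.7).astype(float)
print(top1_accuracy(s, y_mcc), macro_map(s, y_mlc))
```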
We have further analysed the results by focusing on the performance across the individual labels. The analysis revealed that the largest differences in performance between the SSL and supervised models are often observed for the labels with limited availability of images. This behavior is especially pronounced in the context of MLC datasets. To further inspect this behavior, we generated activation maps using GradCAM <cit.> for the fine-tuned models, focusing on selected labels from the MLC datasets. These maps show that the SSL models correctly identify regions relevant to the true label assigned to the image. Moreover, the identified regions from the SSL models are more focused on the specific objects of interest as compared to the identified regions from the supervised models. This suggests that SSL models possess a stronger capability to discern and highlight the specific objects associated with the labels, leading to improved performance in challenging scenarios. § CONCLUSION We have presented a comprehensive study on the use of self-supervised models in the analysis of remote sensing imagery. The study provides clear protocols for standardization of the experimental work and reproduction of the results and surpasses the preexisting work along several dimensions: number of datasets, diversity in the datasets (number of images, labels, spatial resolution, and label distribution), and machine learning tasks considered (MCC and MLC). We have executed extensive experiments using the iBOT framework for SSL, pre-trained on the MillionAID dataset. Its performance was compared against supervised models learned from scratch and using transfer learning. The presented results from the experiments provide strong evidence that SSL with in-domain training leads to improved predictive performance compared to the supervised models across the 14 datasets used in the study. We summarize the main findings as follows: * the overall best performing model is the SSL iBOT model obtained with fine-tuning; * using pre-trained models clearly helps to improve the predictive performance in both transfer learning strategies; * the SSL models consistently outperform the supervised models across all datasets and both transfer learning strategies; and * SSL models correctly identify focused regions relevant for the true label assigned to an image. The immediate line of further work considers studying and understanding the performance of the SSL models in relation to the different dataset properties as well as to the semantic meaning of the labels. This entails designing a variety of ablation studies, generation of artificial datasets, and exploration of different deep architectures for SSL. § ACKNOWLEDGMENT We acknowledge the support of the European Space Agency ESA through the activity AiTLAS - Artificial Intelligence toolbox for Earth Observation (ESA RFP/3-16371/19/I-NB) awarded to Bias Variance Labs, d.o.o.
http://arxiv.org/abs/2307.02540v1
20230705180002
3D Ising CFT and Exact Diagonalization on Icosahedron
[ "Bing-Xin Lao", "Slava Rychkov" ]
hep-th
[ "hep-th", "cond-mat.stat-mech", "cond-mat.str-el" ]
3D Ising CFT and Exact Diagonalization on Icosahedron August 1, 2023 ====================================================================================================== § INTRODUCTION Any d-dimensional conformal field theory (CFT) on ℝ× S^d-1 is Weyl-equivalent to the same theory on ℝ^d. For d=2, this leads to a very efficient method to compute CFT data by diagonalizing critical spin chain Hamiltonians or critical transfer matrices on finely discretized S^1 <cit.>.[See <cit.>, Chapter 3, for a review and many more references.] For d=3 this has been very little studied until recently, because S^2 is harder to discretize than S^1, and rotation invariance is hard to recover <cit.>. Recently, Ref. <cit.> considered a Hamiltonian on S^2 realizing a quantum phase transition in the (2+1)D Ising universality class, which preserves exact rotation invariance. Already for small systems N≤ 24, where N is the number of electrons on the sphere, results were obtained for CFT operator dimensions <cit.>, operator product expansion (OPE) coefficients <cit.>, and the four-point functions <cit.> which compare well with the conformal bootstrap <cit.>, up to small deviations associated with finite N effects. Why does the model of <cit.> work so well? One possibility is that the model is somehow very special (beyond the fact that it realizes exact rotation invariance). Another possibility is that it is the 3D Ising CFT which is special, and any model approximating it will do so rather well. The latter is favored by the sparsity of the low-lying spectrum of the 3D Ising CFT. In the ℤ_2 even scalar sector, after the relevant ϵ of dimension Δ_ϵ≈ 1.41 and the leading irrelevant ϵ' of dimension Δ_ϵ'≈ 3.83, the next irrelevant operator has a rather high dimension Δ_ϵ”≈ 6.89 <cit.>. Given that <cit.> performs a double tuning of the model's parameters, a possible scenario is that the operators ϵ and ϵ' are both tuned away, while the corrections associated with ϵ” are expected to scale as 1/N^α with a high power α=1/2(Δ_ϵ”-3), and may be small even at moderate N. These considerations can be more constructively formulated as the following conjecture: Finite N effects in the spectrum on S^2 of the model of <cit.> can be understood by an effective theory, perturbing the CFT Hamiltonian by integrals of local CFT operators times small couplings. In this paper we will show that a similar conjecture works even for a much simpler model without rotation invariance - the transverse field Ising model (TFIM) on the icosahedron. Preliminary results of this work were reported in <cit.>. The Hamiltonian of the model is given by H_TFIM = -J ∑_⟨ij ⟩ σ^z_i σ^z_j- h ∑_i σ^x_i , where i runs over 12 icosahedron vertices, and ⟨ ij ⟩ over 30 icosahedron edges. The icosahedron is chosen because its spatial symmetry group is the largest irreducible subgroup of O(3). Thus the Hamiltonian (<ref>) is “as close as one can get” to rotation invariance via naive discretization on a regular grid.[Adding more points on the icosahedron faces <cit.> does not increase the symmetry of the model.] Ref. <cit.> achieves exact rotation invariance via an alternative smart construction of “fuzzy sphere regularization” which we will not use here. Our main point here will be that, even with broken rotation invariance and in a very small model with only 12 spins on the sphere, we will still be able to understand deviations of the exact diagonalization spectra from CFT via an effective theory.
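The Hamiltonian above is small enough (Hilbert-space dimension 4096) that its full spectrum can be obtained by brute force. The sketch below builds the 30 icosahedron edges from the standard golden-ratio vertex coordinates and diagonalizes H_TFIM with dense NumPy linear algebra; J=1 and the quoted h are illustrative choices (h near the value the paper later identifies as optimal), the construction is not the authors' code, and no attempt is made at memory or runtime optimization (a sparse construction would be faster).

```python
import numpy as np

# 12 icosahedron vertices: cyclic permutations of (0, +-1, +-phi)
phi = (1.0 + np.sqrt(5.0)) / 2.0
verts = np.array([p for a in (1.0, -1.0) for b in (phi, -phi)
                  for p in ((0.0, a, b), (a, b, 0.0), (b, 0.0, a))])
# edges = vertex pairs at the minimal distance 2 (squared distance 4)
edges = [(i, j) for i in range(12) for j in range(i + 1, 12)
         if abs(((verts[i] - verts[j]) ** 2).sum() - 4.0) < 1e-9]
assert len(edges) == 30

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def kron_chain(ops_by_site):
    out = ops_by_site[0]
    for m in ops_by_site[1:]:
        out = np.kron(out, m)
    return out

def tfim_icosahedron(J=1.0, h=4.375):          # h used here only as an example value
    dim = 2 ** 12
    H = np.zeros((dim, dim))
    for (i, j) in edges:                        # -J sigma^z_i sigma^z_j on every edge
        H -= J * kron_chain([sz if k in (i, j) else id2 for k in range(12)])
    for i in range(12):                         # -h sigma^x_i on every vertex
        H -= h * kron_chain([sx if k == i else id2 for k in range(12)])
    return H

energies = np.linalg.eigvalsh(tfim_icosahedron())   # dense 4096x4096, takes a minute or so
gaps = energies - energies[0]                        # \hat{E}_i = E_i - E_0
print(gaps[:6])

# crude degeneracy check (up to numerical rounding): multiplicities should be 1, 3, 4 or 5
mult = np.unique(np.round(energies, 8), return_counts=True)[1]
print(sorted(set(mult.tolist())))
```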
Once the above conjecture is verified for the icosahedron model, it should be plausible that a similar procedure will work for the more sophisticated model of <cit.>. This will be demonstrated in <cit.>. Until now, the finite N effects in the model of <cit.> were minimized by tuning and by increasing N. Applying effective theory on top of this tuning should allow a significant increase in the accuracy of CFT data extraction, as we discuss in the conclusions. § EXACT DIAGONALIZATION Hamiltonian (<ref>) commutes with the global ℤ_2 spin flip generated by ∏_i σ_i^x. It is also invariant under the spatial icosahedral symmetry I_h ≅ A_5×ℤ_2^O(3). We imagine the icosahedron inscribed into the unit sphere centered at the origin. Then A_5⊂ SO(3) while the spatial ℤ_2^O(3) acts as x→ -x. Finally, the Hamiltonian is real and thus has time reversal symmetry T acting as complex conjugation. The Hilbert space of model (<ref>) has dimension 4096, and the spectrum is easy to compute via exact diagonalization, see Fig. <ref>. Let us describe some of its salient features. All eigenstates are either ℤ_2 odd (blue) or ℤ_2 even (red) with respect to the global ℤ_2 spin flip generated by ∏_i σ_i^x. All eigenvalues have exact degeneracy 1, 3, 4, or 5, which are the dimensions of irreducible representations of A_5, see App. <ref>. It is known that the TFIM on an infinite plane, i.e. on an infinite regular (say, square) planar lattice, shows a quantum phase transition in the (2+1)D Ising universality class at a critical value h=h_c.[See e.g. <cit.>, Chapter 5, for an introduction, and <cit.> for the critical magnetic field determination.] This phase transition separates the phase of the spontaneously broken ℤ_2 symmetry at h<h_c, where the ground state, in infinite volume, is doubly degenerate, and the preserved ℤ_2 symmetry at h>h_c. On a finite lattice of size L× L, the energy gaps Ê_i=E_i-E_0 approach zero at h=h_c as ∼ L^-1 for L→∞. In particular the lightest ℤ_2 even state gap has a characteristic dip close to h=h_c, Fig. <ref>. Some of these features are visible in the icosahedron spectrum in Fig. <ref>. In particular, we see that the lightest ℤ_2 odd state is almost degenerate with the vacuum for h≲ 3 and the lightest ℤ_2 even gap shows a dip around the same position, around h∼ 3. §.§ Adjusting the speed of light We will define h_c in our model as the point where the icosahedron spectrum is the closest to the 3D Ising CFT spectrum. One could have guessed that h_c∼ 3 from the dip in the lightest ℤ_2 even gap, but as we will now see the true h_c is quite a bit higher. This should not be surprising - the dip is very shallow, because we only have 12 points on the sphere. The 3D CFT energy levels on ℝ× S^2 are in one-to-one correspondence with the scaling dimensions of CFT operators: E_i^ CFT = Δ_i. This equation has to be translated to our exact diagonalization context as: αÊ_i = Δ_i+… , with some constant α independent of i, which reflects the arbitrary normalization of the Hamiltonian. Alternatively, one can think of α as the choice of units of time (or of energy) which is necessary to restore the local 3D isotropy of ℝ× S^2, inherited from ℝ^3 via the Weyl transformation. In the condensed matter literature α^-1 is referred to as the “speed of light” parameter of a quantum critical point. The terms … in (<ref>) stand for correction terms coming from the perturbations of the CFT Hamiltonian, which will be discussed below.
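As a concrete illustration of how α can be fixed in practice (anticipating the least-squares fit performed in the next subsection), minimizing δ = Σ_i (α Ê_i − Δ_i)² over α alone has the closed-form solution α = Σ_i Ê_i Δ_i / Σ_i Ê_i². In the sketch below the gap values are placeholders standing in for exact-diagonalization output, and the target dimensions are bootstrap values rounded to a few digits.

```python
import numpy as np

# target CFT dimensions for sigma, d sigma, epsilon, d epsilon (bootstrap values, rounded)
delta = np.array([0.5181, 1.5181, 1.4126, 2.4126])

# exact-diagonalization gaps \hat{E}_i for the corresponding levels at some h
# (placeholder numbers, used purely for illustration)
gaps = np.array([0.95, 2.85, 2.60, 4.60])

alpha = (gaps * delta).sum() / (gaps ** 2).sum()   # minimizes sum_i (alpha*E_i - Delta_i)^2
rescaled = alpha * gaps
residual = ((rescaled - delta) ** 2).sum()
print(alpha, rescaled, residual)
```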
Ignoring these terms for now and taking the ratio of, say, the first _2 odd and _2 even energy levels, which should be related to 3D Ising CFT operators σ and ϵ, the speed of light cancels, and we get Ê_1/Ê_2 ≈Δ_σ/Δ_ϵ≈0.367 (h=h_c) , where in the second equation we used the values of Δ_ϵ and Δ_σ from the conformal bootstrap, see App. <ref>. From this equation we find h_c≈ 4.5. Let us test this prediction with more states. Working in a window h∈ [3.6,4.8] we consider the first two levels of degeneracy 1 and 3, in _2=± sectors. These levels should correspond to operators σ, ∂_μσ, ϵ, ∂_μϵ, of dimensions Δ_σ, Δ_σ+1, Δ_ϵ, Δ_ϵ+1. We perform, for each h, a fit for α minimizing the quantity δ= ∑_i(αÊ_i - Δ_i)^2 . where i runs over these 4 states. The result is shown in Fig. <ref>, where horizontal dotted lines show the four exact CFT energy levels, and solid curves show αÊ_i for the best fit value of h. We see that the rescaled exact diagonalization gaps are close to the CFT energy levels, with the best agreement at around h∼ 4.2, somewhat below the above h_c estimate 4.5. However, the agreement is clearly far from perfect. For example the correct ordering between the energy levels corresponding to ∂_μσ and ϵ is not very well reproduced by the numerical data. We will now describe a more sophisticated theoretical effective model, which will lead to a much better agreement. § PERTURBATIONS OF THE CFT HAMILTONIAN We consider the 3D Ising CFT on × S^2. Recall that the CFT Hilbert space on S^2 is in one-to-one correspondence with the local CFT operators , and we will label them as |⟩. Here can be a primary operator, or a derivative of a primary operator (descendant). The CFT Hamiltonian H_ CFT is diagonal in this basis, with eigenvalue the scaling dimension: H_CFT|⟩=Δ_|⟩ . To describe the spectrum of the TFIM on icosahedron, we will consider an effective Hamiltonian which will be the perturbation of the CFT Hamiltonian: H=H_CFT+δH . The CFT Hamiltonian preserves global _2, spatial O(3), as well as time reversal. The perturbation δ H has to preserve symmetries of H_ TFIM: global _2, spatial I_h, and time reversal. We will construct δ H by integrating local operators of the 3D Ising CFT over the sphere. Let g(y) be a real function on the sphere. Then our δ H will be a sum of terms having the form: ∫_S^2 g(y) (0,y), where (0,y) is a local operator located at the τ=0 slice of the cylinder × S^2 and y is a coordinate on S^2 (see Fig. <ref>). Here and below all integrals over S^2 are with the standard uniform metric. Note that H_ CFT itself can also be written in this form, as the integral of the stress tensor component T^ττ. We will choose to be a primary operator, i.e. without derivatives. Derivatives along the sphere can be integrated by parts. Derivatives in the τ direction give δ H which have zero diagonal matrix elements. In this paper we will only do first-order perturbation theory so only diagonal matrix elements will be of interest. To preserve global _2, we will consider _2-even operators . In case of scalar , further symmetry requirements are as follows. To preserve spatial I_h, the coupling function g(y) will have to be I_h invariant. We assume that has even intrinsic spatial parity, as is the case for all low-lying primary operators of the 3D Ising CFT. Then (<ref>) preserves time reversal. The 3D Ising CFT also contains low-lying symmetric traceless primaries of even spin ℓ, and can be one of those. In that case g(y)(0,y) in (<ref>) should be understood as g_μ_1…μ_ℓ(y)_μ_1…μ_ℓ(0,y). 
Indices μ_i can point along the sphere and along the τ direction. Time reversal τ→ -τ requires that g_μ_1…μ_ℓ(y) vanishes whenever an odd number of indices equals τ. Furthermore, using tracelessness of , we may assume without loss of generality that g_μ_1…μ_ℓ(y) is nonvanishing only if all indices μ_i are along the sphere. To preserve spatial I_h, this tensor function has to be I_h covariant. In the next sections we will see how adding perturbations of the form (<ref>) we will be able to achieve better and better description of the exact diagonalization spectrum of the TFIM on the icosahedron. § NUMERICAL TESTS §.§ Adding ϵ perturbation On physical grounds, we expect that the most important perturbation is that of =ϵ, which is the only relevant _2 even scalar primary. We are interested in the effect of this perturbation on the CFT energy levels. We will assume that the coupling function g(y) is small and apply the first-order Hamiltonian perturbation theory: δE_ψ= ⟨ψ| δH |ψ⟩/⟨ψ|ψ⟩ . We will be considering states |ψ⟩ related to the primary operators |⟩ and to their descendants. Mapping the cylinder × S^2 to ^3, the necessary matrix elements can be related to the three-point (3pt) functions where is inserted at 0 and ∞ and is integrated over the unit sphere. These computations are carried out in App. <ref>. Here we will just describe the general features and present the needed results. The function g(y) will typically enter the answer only through its overall integral over the sphere g_:= ∫_S^2 g(y) , as an overall proportionality factor. Furthermore, the matrix element for |ψ⟩=|⟩ will be proportional to the OPE coefficient f_, while the matrix elements for |ψ⟩ descendants of |⟩ - to the same OPE coefficient times a factor depending on Δ_ and Δ_, which is fixed by conformal invariance. When perturbation is a primary scalar, and is also a scalar, we have (see (<ref>), (<ref>)): δE_ = g_f_ , δE_∂ = g_f_ A_,, A_,= 1 + _ (_ - 3)/6 _ . We now proceed to the numerical check. We add the correction =ϵ to the CFT Hamiltonian. We consider the same 4 energy gaps as in Section <ref>, corresponding to σ, ∂σ, ϵ, ∂ϵ. We test equations (<ref>) for these states, replacing … by the correction terms δ E_i given in (<ref>). These corrections can all be evaluated in terms of the scaling dimensions Δ_σ, Δ_ϵ and the OPE coefficients f_σϵσ, f_σϵσ, which are all known, see App. <ref>, times an effective coupling g_ϵ which we treat as a free parameter. We perform the fit by minimizing the following quantity over α and g_ϵ: δ= ∑_i (αÊ_i- Δ_i -δE_i)^2 . The result of this exercise are shown in Fig. <ref>. Compared to Fig. <ref>, the fit works much better. The corrected energy levels are almost constant with h in the shown window, while in Fig. <ref> the levels had a linear “drift”. This drift is now corrected away, for all 4 states, by adding the effect of the coupling g_ϵ, which itself depends almost linearly on h. Remarkably, g_ϵ crosses 0 around h=h_c≈ 4.15. It is positive at h>h_c and negative at h<h_c. This is in full agreement with the expectation that the critical point should have g_ϵ =0, while positive/negative g_ϵ should correspond to the phase of preserved/broken _2 invariance. §.§ Adding ϵ' perturbation To we fix the discrepancy between the numerical data and the CFT spectrum in Fig. <ref>, let us try to add the next scalar perturbation, which is the leading irrelevant scalar ϵ', Δ_ϵ'≈ 3.83. 
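Before adding the ε' perturbation, the two-parameter fit just performed can be illustrated concretely: the objective is linear in both α and g_ε, so it reduces to ordinary least squares. The sketch below uses the first-order corrections quoted in this section, with the subscripts that were lost in extraction restored as an assumption (δE_O = g_ε f_{OεO} and δE_{∂O} = g_ε f_{OεO} A_{O,ε}, A_{O,ε} = 1 + Δ_ε(Δ_ε−3)/(6Δ_O)), rounded bootstrap values for the dimensions and OPE coefficients, and placeholder gap values.

```python
import numpy as np

d_sig, d_eps = 0.5181, 1.4126           # bootstrap dimensions (rounded)
f_sse, f_eee = 1.0519, 1.5324           # f_{sigma sigma eps}, f_{eps eps eps} (rounded)

def A(d_op, d_pert):                    # A_{O,V} = 1 + Delta_V (Delta_V - 3) / (6 Delta_O)
    return 1.0 + d_pert * (d_pert - 3.0) / (6.0 * d_op)

# levels: sigma, d sigma, epsilon, d epsilon
delta = np.array([d_sig, d_sig + 1, d_eps, d_eps + 1])
coeff = np.array([f_sse, f_sse * A(d_sig, d_eps), f_eee, f_eee * A(d_eps, d_eps)])
gaps = np.array([0.95, 2.85, 2.60, 4.60])        # placeholder ED gaps at some h

# solve  alpha * gaps - g_eps * coeff  ~=  delta   in the least-squares sense
design = np.column_stack([gaps, -coeff])
(alpha, g_eps), *_ = np.linalg.lstsq(design, delta, rcond=None)
print(alpha, g_eps)
```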
While g_ϵ varies linearly and crosses zero at the critical point, the new effective coupling g_ϵ' is expected to be essentially constant in h and therefore has a chance to fix the discrepancy in Fig. <ref> which is nearly h independent. For this new test we consider the same 4 states as in Fig. <ref>. We introduce corrections δ E_i which are the sums of the corrections (<ref>) for =ϵ,ϵ' with independent couplings g_ϵ, g_ϵ'. The OPE coefficients f_ϵ' are known (App. <ref>) and the new correction can be evaluated. We minimize (<ref>) over α,g_ϵ, g_ϵ'. The results are shown in Fig. <ref>. We see in that figure that indeed the coupling g_ϵ' is almost constant. Although the fit quality is improved by about 30%, it remains imperfect. We will see now see how to solve the remaining discrepancy by adding the spin-4 perturbation C_μνλσ. §.§ Adding C_μνλσ perturbation In this section we will add yet another perturbation to the CFT Hamiltonian. The total number of parameters in the fit will become 4. To make the game interesting, we need to consider more than 4 energy levels σ,σ,ϵ,ϵ we considered so far. In this section we will consider energy levels corresponding to all descendants of σ,ϵ up to and including level 2. In CFT, these energy levels are Δ_+k with =σ,ϵ and k=0,1,2. However in the exact diagonalization we expect that the level 2 descendants split as 1+5, corresponding to the dimensions of irreducible O(3) representations, which also remain irreducible under I_h. Thus we have the total of 8 energy levels. Which new perturbation should we consider next? One possibility is ϵ” but this operator has very high dimension 6.8956(43) <cit.> so we judge its importance unlikely. What about adding the stress tensor T_μν? In an exactly rotationally invariant setting, this perturbation is unimportant because it just rescales the radius of the sphere or, equivalently, rescales the units of time. This effect is already taken care of in our scheme via the speed of light parameter α. Although our model is not rotationally invariant but only I_h invariant, perturbations of scalar descendants feel this difference only starting from k=3. This is related to the fact that the first non-isotropic invariant tensor of the I_h group has six indices. Since here we are dealing with k≤ 2, we do not have to consider T_μν perturbations. See App. <ref> for a more detailed discussion. We are thus led to consider the perturbation related to the spin-4 primary operator C_μνλσ, of dimension 5.022665(28). In Fig. <ref> we show the results of the test where we fit the 8 above-mentioned energy levels. The cost function is (<ref>) where i=1,…,8 and * Ê_1,…,Ê_4 are the energy gaps corresponding to σ,ϵ,σ,ϵ used in the previous plots. * Ê_5, Ê_6 are the energy gaps corresponding to σ. These are the _2-odd levels having multiplicity 1 and 5 in Fig. <ref>, located above the _2-odd multiplicity-3 level of σ. * Ê_7, Ê_8 are the energy gaps corresponding to ϵ. These are the _2-even levels having degeneracy 1 and 5. There is some discrete choice to be made in assigning these levels, as they have to be distinguished from the multiplicity-1 level of ϵ' and the multiplicity-5 level of T_μν which are also _2-even and have a closeby scaling dimension. We chose the only assignment which leads to a satisfactory fit. The correction terms δ E_i are now the sums of three terms, with independent couplings g_ϵ,g_ϵ', g_C. Corrections due to C are given in App. <ref>. Results of the fit with only 2 couplings g_ϵ,g_ϵ' are also shown in Fig. 
<ref> for comparison. We see that the fit is outstandingly good with 3 couplings and α, while it is much worse with 2 couplings and α (not surprisingly because of the conclusions in Section <ref>. It is especially remarkable that with 3 couplings and α we are able to reconcile the second level descendants of σ and ϵ with the CFT data. In Fig. <ref>, we see that the 1+5 components of the CFT states σ and ϵ have a significant splitting in the exact diagonalization spectrum. Our correction terms are able to account for this splitting very well. The correction term is especially large for ^2σ, due to σ being close to the unitarity bound, see the discussion in App. (<ref>). §.§ Tests for T_μν and ϵ' levels Two more interesting energy levels to consider are the stress tensor T≡ T_μν and ϵ'. These are _2 even and have multiplicity 5 and 1, respectively. The corresponding exact diagonalization levels could be confused with the ϵ levels which are split as 1+5, but we identified those in the previous section, so consequently we can now identify T_μν and ϵ', see the labels in Fig. <ref>. We would like to test the formula αÊ_i = Δ_i +δE_i for these two levels. For this test we take δ E_ϵ' the sum of the ϵ and ϵ' corrections and δ E_T as the ϵ correction. We take α and the couplings g_ϵ,g_ϵ' from Fig. <ref>(b). The result of this test is shown in Fig. <ref>. The dashed lines show αÊ_i with α from Fig. <ref>(b), i.e. just the speed of light rescaling. This agrees poorly with the CFT levels. The solid lines show αÊ_i -δ E_i. This agrees much better with the CFT levels. The agreement is still imperfect, but the deviation is almost constant in h. This gives us hope that this deviation may be fixed by the corrections due to C and, in case of T, due to ϵ'. Indeed, coupling g_C and g_ϵ' are almost constant in h in the considered range, see Fig. <ref>(b). These corrections may not be evaluated at present since the corresponding OPE coefficients are not known. §.§ Level-3 descendants It was already almost a miracle that we managed to reproduce the level 2 descendants σ and ϵ, due to a significant splitting between the 1+5 components. In this section, let us see what happens for level 3. Level-3 descendants of a primary scalar split as 3+3'+4, see App. <ref>. The corresponding components for =σ,ϵ are identified in Fig. <ref>. We see that the splittings between the different components are large, especially for σ. Can we reproduce them? As shown in App. <ref>, the corrections for the component 3 of are sensitive to the same couplings g_ϵ, g_ϵ', g_C determined in Section <ref> by fitting the descendants of level k≤ 2. On the other hand the corrections for 3' and 4 take the form δE^(4)_∂ = δE^(7)_ + a , δE^(3')_∂ = δE^(7)_ - 4/3 a , where δ E^(7) depends on g_ϵ, g_ϵ', g_C, while the splitting a depends on extra couplings g̃_ϵ, g̃_ϵ', g̃_C measure the deviation of the corresponding coupling functions g(y) from constants, consistently with I_h invariance. Given that the splitting effect depends on three new couplings, it would be easy to fix those couplings to reproduce the splitting Ê^(4) - Ê^(3'), both for σ and ϵ. There is no great value in this observation, but we have checked that this could be done reasonably well even using just one coupling g̃_ϵ. It is more interesting to look at a new averaged variable Ê^(7) =4/7 Ê^(4) + 3/7 Ê^(3') , which is not sensitive to to the new couplings g̃_ϵ, g̃_ϵ', g̃_C, at least to the first order as we are using here. In Fig. <ref> we do the following test. 
For =σ,ϵ, we plot αÊ_i -δ E_i for the gaps corresponding to the component 3 of and for the linear combinations (<ref>) of the 3' and 4 components, which should not be sensitive to the splitting effect. The values of α,g_ϵ, g_ϵ', g_C are taken from Fig. <ref>(b). We compare to the CFT levels . We see that the component 3 agrees rather well. On the other hand the linear combination Ê^(7) agrees well for ϵ but not for σ. We do not consider this disagreement as contradicting our framework. Indeed the splitting between 3' and 4 for σ is huge. Applying our first-order effective theory in this case is clearly stretching it beyond its range of validity. § CONCLUSIONS AND OUTLOOK At the first glance, the exact diagonalization spectrum of the icosahedron in Fig. <ref> is a mess. In this paper we brought order to this chaos, relating this spectrum to the 3D Ising CFT spectrum by means of a first order effective theory perturbing the CFT Hamiltonian by integrals of local operators over the sphere. With just three effective couplings, and a speed of light parameter α, we managed to reproduce very well 8 energy levels corresponding to the descendants of the CFT operators σ and ϵ up to level 2, in a window of transverse magnetic fields around h_c∼ 4.375. We showed that the effective theory also tends to reproduce the energy levels of ϵ' and T, although the agreement is less than perfect, because not all corrections could be evaluated due to incomplete knowledge of CFT data. We found that the effective theory stops being fully successful for level 3 descendants. This is not so surprising because the splittings between states which have to be exactly degenerate in CFT become huge at this level. As any effective field theory, ours has finite range of validity, and large splittings mark this range. We thus consider Conjecture <ref> demonstrated for the icosahedron model. For the fuzzy sphere model of <cit.>, Conjecture <ref> will be demonstrated in the upcoming paper <cit.>. In the icosahedron model, we have not attempted to extract new CFT data from exact diagonalization. The icosahedron model is too small and “dirty” for that. However, the game will become much more interesting for the fuzzy sphere model of <cit.>. When comparing to CFT, the authors of <cit.> used only the speed of light parameter α. Indeed, finite N corrections in that model were already very small. As discussed in the introduction, this may be attributed to a double tuning of model's parameters, which has the chance to tune to zero the coefficients of the relevant and of the leading irrelevant _2 even scalar operators. When deviations are already small, applying effective theory should be an extremely lucrative enterprise. First of all, it will lead to further dramatic improvement in the agreement with CFT. Second, it will lead to new ways of determining the CFT data from the deviations themselves, since these deviations are controlled, as we have seen, by universal formulas depending on the CFT parameters, up to a handful of couplings which must be determined from a fit. The noise will thus become the signal. To give an idea, suppose that the deviations for CFT operators _1,_2 are proportional to the same effective coupling g_∫_S^2. Then, by measuring the deviation ratio we can determine the ratio of the OPE coefficients f__1_1/f__2_2. If one of these OPE coefficients is known (e.g. from the bootstrap), we thus determine the other. For the icosahedron model, the first-order conformal perturbation theory was sufficient to make the point. 
To fully leverage the power of the effective field theory for the fuzzy sphere model might require going to the second order in perturbing effective coupling. This promises a lot of interesting interplay between conformal field theory and exact diagonalization spectroscopy in the near future. SR thanks Benoit Sirois for collaboration at the early stages of this work. BL thanks Benoit Sirois, Ning Su, Zechuan Zheng and especially Junchen Rong for useful discussions. This work is supported by the Simons Foundation grant 733758 (Simons Bootstrap Collaboration). BL thanks the IHES for hospitality. SR thanks Jesper Jacobsen for references about finite size scaling in (1+1)D. § IRREDUCIBLE REPRESENTATIONS OF THE ICOSAHEDRAL GROUP In this appendix we introduce the irreducible representations of the proper icosahedral group I⊂ SO(3) which is isomorphic to A_5. The full icosahedral group I_h=I×_2^O(3). There are five irreducible representations, labeled by their dimensions 1 (trivial representation), 3, 3', 4 and 5. We can think of 3 and 5 as the vector and the symmetric traceless two-index tensor representations of SO(3), which remain irreducible under I. On the other hand, the 7-dimensional symmetric traceless three-index tensor of SO(3) splits as 3'+4 under I. See Table <ref> for the character table of I. § 3D ISING CFT In this appendix we collect known 3D Ising CFT data used in this work. Primary operators of the 3D Ising CFT are characterized by their scaling dimension Δ, spin ℓ, and _2 and _2^O(3) quantum numbers. All operators we need have _2^O(3)=1. Scaling dimensions of primary operators and their OPE coefficients are shown in Tables <ref>. In addition, we have f_TϵT= 0.8658(69) <cit.> . This was also determined in <cit.> who found f_Tϵ T=0.87(6) (in the normalization of <cit.>). Stress tensor OPE coefficient f_ T, for a primary scalar and T canonically normalized, is given by f_T = - d _/d - 1 1/Vol(S^2) . This is in the normalization where the 3pt function is given by (Z^μ = x_13^μ/x_13^2 - x_23^μ/x_23^2) ⟨(x_1) (x_2) T^μν (x_3)|=⟩ f_T /|x_12|^2_-1 |x_23| |x_31|(Z^μZ^ν - δ^μν/3Z^ρZ_ρ) . § MATRIX ELEMENTS In this appendix we will explain how to evaluate the energy correction (<ref>). In this paper we will need this formula for ψ corresponding to a scalar primary or its descendants up to level 3, as well as for ψ the stress tensor. We will focus on these cases. The perturbation will be a primary scalar or a symmetric traceless primary up to spin 4. The basic idea to compute ⟨ψ|δ H|ψ⟩ is that we can map × S^2 to ^3 via r=e^τ. Then the τ direction becomes the radial direction in ^3. The bra and ket states ψ are mapped to operator insertions at ∞ and 0, while has to be integrated over the unit sphere. §.§ Scalar and The most basic case is when and are scalar primaries. Then we have: ⟨| δH|⟩= ∫_|y|=1 g(y) ⟨(∞) (y) (0)⟩, where we abuse the notation and use y to parameterize the unit sphere embedded in ^3. As usual we have (∞):=lim_w→∞ |w^2|^Δ_(w). From the known expression for the CFT 3pt function ⟨(w) (y) (x)⟩, we have ⟨(∞) (y) (0)⟩=f_, independently of y, |y|=1. Integrating over the sphere and taking into account that the primary state is unit normalized we get, for , scalars: δE_= g_f_ , g_:=∫_S^2 g(y) . §.§.§ We next discuss the case of descendants of . For the first level descendants we have to evaluate: ⟨∂_μ| δH|∂_ν⟩= ∫_|y|=1 g(y) ⟨P_μ| (y) |P_ν⟩, where in the r.h.s. we have states in the radial quantization. The matrix element in the r.h.s. can be evaluated in two equivalent ways. 
The first way is to write ⟨P_μ| (y) |P_ν⟩= ⟨|K_μ(y) P_ν|⟩, and use commutation relations of the conformal algebra. The second way is to relate the computation to the known expression for the 3pt function, by writing: ⟨P_μ| (y) |P_ν⟩= lim_x→0 _x^μ ⟨^†(x) (y) ∂_ν(0)⟩ where ^†(x):= |x^2|^-_(x^μ/x^2) is the inversion applied to . Let us discuss next the consequences of the icosahedral symmetry I_h. Since δ H is I_h invariant, we must have: ⟨∂_μ| δH|∂_ν⟩ = B δ_μν . Indeed in general this matrix element has to be an invariant 2-tensor of I_h. Since I_h is an irreducible subgroup of O(3), all such tensors are multiples of δ_μν. The constant B can be found contracting (<ref>) with δ_μν. Then in the r.h.s. we will get ⟨ P_μ| (y) |P^μ⟩, which is a constant on the sphere. Therefore, B will be proportional to g_, i.e. insensitive to the deviations of g(y) from its average value. The final point is that we also have to compute the normalization of the descendant states, i.e. the constant in ⟨P_μ| P_ν⟩=δ_μν . This can be computed as in (<ref>) or (<ref>) setting = 1. Putting all these ingredients together, we find, for , scalars: δE_∂ = g_f_ A_,, A_,= 1 + C_ /6 _ . where C_ =_ (_ - 3) is the quadratic Casimir eigenvalue of a scalar primary. Note that all three states get the same energy correction. This is related to the fact that the vector representation of O(3) remains irreducible under I_h. §.§.§ The computation for _μ_ν is done similarly to _μ but there is an interesting detail. In CFT all 6 states _μ_ν are degenerate but they form two irreducible representations of O(3) - the singlet ^2 and the traceless symmetric 2-tensor “_μ_ν - trace” with 5 states. These two representations remain irreducible under I_h. Thus we expect two different corrections for these two subspaces: δ E^(1)_∂∂ and δ E^(5)_∂∂. The matrix element ⟨∂_μ_1_μ_2 | δH|∂_ν_1_ν_2 ⟩= ∫_|y|=1 g(y) ⟨P_μ_1P_μ_2| (y) |P_ν_1P_ν_2 ⟩ is computed as for descendants. It should be an invariant tensor of I_h with 4 indices and any such tensor is made of Kronecker δ's. This implies that, as for , we can reduce the computation to rotationally invariant matrix elements and function g(y) will enter only via g_. We omit the details and only give the final result, for , scalars: δE^(1)_∂ = g_f_ A^(1)_,, A^(1)_,= 1 + C^2_+ C_(8 _- 2)/12 _(2 _- 1) , δE^(5)_∂ = g_f_ A^(5)_,, A^(5)_,= 1 + C^2_ + 10 C_(2 _+ 1) /60 _(1 + _) . It is worth pointing out that the δ E^(1) correction becomes singular in the limit _→ 1/2. This is because the norm of the singlet state goes to zero in this limit. This is related to the fact that the _=1/2 scalar is free, hence ^2 =0. In the 3D Ising CFT, Δ_σ≈ 0.518 is close to 1/2. Hence, corrections for the singlet state ^2 σ are expected to be much larger than for the rest of level-2 descendants of σ. §.§.§ A new effect appears for the third level descendants _μ_1_μ_2_μ_3. There is a total of 10 CFT states in this level, split under O(3) as 3+7, which is the vector _μ^2 plus the symmetric traceless 3-index tensor “_μ_1_μ_2_μ_3- traces”. For a constant g(y), δ H preserves O(3) and we expect 2 independent corrections. For a non-constant g(y), δ H only preserves I_h. Under I_h, 3_O(3) remains irreducible and 7_O(3) splits as 3'+4. Thus for non-constant g(y) we expect three independent corrections at this level - the first time we will see the difference between O(3) and I_h symmetry. At the level of the matrix element ⟨_μ_1_μ_2 _μ_3 | δH|_ν_1_ν_2 _ν_3 ⟩= ∫_|y|=1 g(y) ⟨P_μ_1P_μ_2 P_μ_3 | (y) |P_ν_1P_ν_2 P_ν_3 ⟩ this is seen as follows. 
As usual this should be an invariant tensor of I_h. In addition to tensors built out of the Kronecker δ, the group I_h has one extra invariant six-index tensor 𝔖, see e.g. <cit.>. This tensor appears in the matrix element for non-constant g(y), and this contribution causes the splitting between 3' and 4 representations. Equivalently, one may ask for which minimal spin ℓ the spin ℓ representation of O(3) contains an I_h singlet. This happens first for ℓ=6. Consider a function on the unit sphere given in terms of the above tensor 𝔖 (which is symmetric and can be chosen traceless) as p_𝔖(y) = 𝔖_μ_1…μ_6 y^μ_1⋯y^μ_6 . This function is I_h invariant by construction but it belongs to the spin-6 representation of O(3). In addition to g_ we define the second average coupling by g̃_=∫_S^2 g(y) p(y)_𝔖 . It is this coupling which will govern the splitting between 3' and 4. Omitting the details, the 3 corrections are given by, for and primary scalars: δE^(3)_∂ = g_f_ A^(3)_,, δE^(3')_∂ = δE^(7)_ - 4/3 g̃_f_ B_, , δE^(4)_∂ = δE^(7)_ + g̃_f_ B_, . where δE^(7)_ = g_f_ A^(7)_, and A^(3)_, = 1 + C_^3 + C_^2 (22 _+6) + 20 C_(6 _^2 + 2 _-1) /120 _(_+ 1) (2 _-1) , A^(7)_, = 1 + C_^3 + C_^2 (42 _+46) + 140 C_ (3 _^2 + 6 _+ 2) /840 _(_+ 1)(_+ 2) , B_, = _^2 (_ + 2)^2 (_ + 4)^2/ _(_+ 1) (_+ 2) . The singularity of A^(3)_, as Δ_→ 1/2 has the same origin as for A^(1)_, in Eq. (<ref>). §.§ Corrections from =T_μν Let us discuss corrections when is the stress tensor. We need to consider the matrix element ∫_|y|=1 g_μν(y) ⟨ψ'_k| T_μν(y) |ψ_k⟩ , where ψ_k, ψ'_k are any two states in the multiplet at level k≥ 0, and g_μν(y) is an I_h invariant tensor function on the sphere. As discussed in Section <ref>, we may assume that y^μ g_μν(y) = 0. We have g_μν(y) = b (y_μy_ν- δ_μν)+ (𝔖)+… , where b is a constant, (𝔖) stands for I_h invariant functions constructed by contracting 𝔖_μ_1…μ_6 with powers of y, i.e. linear combinations of terms like 𝔖_μνμ_1…μ_4y_μ_1…y_μ_4 , y_(μ 𝔖_ν) μ_1…μ_5y_μ_1…y_μ_5, y_μy_νp_𝔖(y), δ_μνp_𝔖(y) , with p_𝔖(y) from (<ref>), while … in (<ref>) stands for terms involving higher invariant tensors of I_h. The first term in (<ref>) will give b ∫_|y|=1 ⟨ψ'_k| y^μy^νT_μν(y) |ψ_k⟩ . The integral here is nothing but the CFT Hamiltonian (i.e. dilatation operator), giving the scaling dimension of the state. Thus the matrix element equals b (Δ_+k) ⟨ψ'_k| ψ_k⟩ , Since this correction is exactly proportional to the CFT energy of the state, it is equivalent to a small correction in the speed of light parameter α in our fits. Therefore, we do not have to consider it. As for the terms (𝔖)+… in (<ref>), they will not contribute to the matrix element (<ref>) for k≤ 2, because the total number of indices in the states ψ_k, ψ'_k is not enough to saturate the indices of 𝔖. For k=3 the terms (𝔖) will contribute and they will give rise to additional shifts to 3' and 4, as in Section <ref>. Omitting the details the result is given by: δE^(4)_∂ = g̃_T f_T/_(_+1)(_+2) , δE^(3')_∂ = - 4/3 δE^(4)_∂ , where g̃_T is given by an integral of g_μν(y) against a tensor function constructed out of 𝔖 (we won't need the exact expression). Recall that f_ T∝_, see App. <ref>. The relative size (-4/3,1) of these corrections is the same as in Section <ref>, which is not accidental and can be interpreted via the Wigner-Eckart theorem. §.§ Corrections from =C_μνλσ The discussion has a lot of similarity to =T_μν. 
We consider the matrix element ∫_|y|=1 g_μνλσ(y) ⟨ψ'_k| C_μνλσ(y) |ψ_k⟩ , where ψ_k, ψ'_k are any two states in the multiplet at level k≥ 0, and g_μνλσ(y) is an I_h invariant tensor function on the sphere. As discussed in Section <ref>, we may assume that y^μ g_μνλσ(y) = 0. We have g_μνλσ(y) = g_C/4π (y_μy_νy_λy_σ- terms involving at least one δ)+ (𝔖)+… , where g_C is a constant and (𝔖) stands for I_h invariant functions constructed by contracting 𝔖_μ_1…μ_6 with powers of y and … with terms involving higher invariant tensors of I_h. The first term in (<ref>) contributes to the matrix element as g_C/4π ∫_|y|=1 ⟨ψ'_k| y^μy^νy^λy^σC_μνλσ(y) |ψ_k⟩ . Here the discussion deviates from =T_μν where such a correction was equivalent to renormalizing α, while here it is not, so we have to evaluate it carefully. This results in energy shifts, for levels k≤ 3: δE_ = g_C f_C , δE_ = g_C f_C A_,C δE^(1)_ = g_C f_C A^(1)_,C, δE^(5)_ = g_C f_C A^(5)_,C, δE^(3)_∂ = g_C f_C A^(3)_,C, δE^(7)_ = g_C f_C A^(7)_,C . Note that, as in the previous section, the energy shifts cause the split 6→ 1+5 for k=2 and 10→ 3+7 for k=3. We will give the expressions for the A coefficients in the general case when has spin ℓ (while ℓ=4 in the case at hand). The Casimir eigenvalue C_=Δ_(Δ_-3)+ ℓ(ℓ+1). These coefficients generalize the previously given ℓ=0 expressions and they are given by: A_, = 1 + C_ /6 _, A^(1)_, = 1 + C^2_+ C_(8 _- 2)- 4 ℓ( ℓ+1) ( C_ - (ℓ+2)(ℓ-1))/12 _(2 _- 1) , A^(5)_, = 1 + C^2_ + 10 C_(2 _+ 1) + 2ℓ(ℓ+1) ( C_ - (ℓ+2)(ℓ-1)) /60 _(1 + _) , A^(3)_, = 1 + C_^3 + C_^2 (22 _+6) + 20 C_(6 _^2 + 2 _-1) /120 _(_+ 1) (2 _-1) - 4 ℓ(ℓ+1) (C_ - (ℓ+2) (ℓ-1)) (C_ + 14 _+ 8) /120 _ (_+1) (2 _-1) , A^(7)_, = 1 + C_^3 + C_^2 (42 _+46) + 140 C_ (3 _^2 + 6 _+ 2) /840 _(_+ 1)(_+ 2) + 2 ℓ(ℓ+1) (C_ - (ℓ+2) (ℓ-1)) (3 C_ + 42 _+44)/840 _(_+ 1) (_+ 2) . For k≤ 2 the total number of indices in the states ψ_k, ψ'_k is not enough to saturate the indices of 𝔖. Therefore the terms (𝔖)+… in (<ref>) will not contribute to the matrix element (<ref>) for k≤ 2. For k=3 the terms (𝔖) will contribute and they will give rise to additional shifts to 3' and 4 given by δE^(4)_∂ = g̃_C f_C /_(_+1)(_+2) , δE^(3')_∂ = - 4/3 δE^(4)_∂, where the coupling g̃_C is given by an integral of g_μνλσ(y) involving 𝔖, and we also include into g̃_C some Δ_C dependent factors (we won't need the exact expression). utphys
http://arxiv.org/abs/2307.00217v1
20230701042859
Metric Learning-Based Timing Synchronization by Using Lightweight Neural Network
[ "Chaojin Qing", "Na Yang", "Shuhai Tang", "Chuangui Rao", "Jiafan Wang", "Hui Lin" ]
eess.SP
[ "eess.SP" ]
Metric Learning-Based Timing Synchronization by Using Lightweight Neural Network This work is supported in part by the Sichuan Science and Technology Program (Grant No. 2023YFG0316, 2021JDRC0003), the Special Funds of Industry Development of Sichuan Province (Grant No. zyf-2018-056), and the Industry-University Research Innovation Fund of China University (Grant No. 2021ITA10016). Chaojin Qing^∗, Na Yang^∗, Shuhai Tang^∗, Chuangui Rao^∗, Jiafan Wang^∗, and Hui Lin^† ^∗School of Electrical Engineering and Electronic Information, Xihua University, Chengdu, 610039, China ^† Hangtiankaite Electromechanical Technology Co., Ltd, Chengdu, 611730, China Email: ^∗[email protected], ^†[email protected] =================================================================================================================================================================================================================================================================================================================================================================================================== Timing synchronization (TS) is one of the key tasks in orthogonal frequency division multiplexing (OFDM) systems. However, multi-path uncertainty corrupts the TS correctness, making OFDM systems suffer from severe inter-symbol interference (ISI). To tackle this issue, we propose a timing-metric learning-based TS method assisted by a lightweight one-dimensional convolutional neural network (1-D CNN). Specifically, the receptive field of the 1-D CNN is specifically designed to extract the metric features from the classic synchronizer. Then, to combat the multi-path uncertainty, we employ the varying delays and gains of multi-path (the characteristics of multi-path uncertainty) to design the timing-metric objective, and thus form the training labels. This is typically different from the existing timing-metric objectives with respect to the timing synchronization point. Our method substantively increases the completeness of training data against the multi-path uncertainty due to the complete preservation of metric information. By this means, the TS correctness is improved against the multi-path uncertainty. Numerical results demonstrate the effectiveness and generalization of the proposed TS method against the multi-path uncertainty. Timing synchronization, OFDM, lightweight CNN, timing-metric objective, multi-path uncertainty § INTRODUCTION Orthogonal frequency division multiplexing (OFDM) has been subject to extensive research efforts not only for fifth generation (5G) systems but also for Internet-of-Things (IoT) systems<cit.>. In OFDM systems, correct timing synchronization (TS) aims to find the start of the receiver discrete Fourier transform (DFT) window within an inter-symbol interference (ISI)-free region of an OFDM symbol<cit.>. Although synchronizing to this ISI-free region produces a phase rotation, this impairment can be easily countered by channel equalization<cit.>. However, achieving this task is not easy due to the multi-path uncertainty. The multi-path uncertainty is caused by the rich and diverse communication environments<cit.> and is manifested in wireless channels with varying power delay profile (PDP). Because of the multi-path uncertainty, the timing metric is usually corrupted in non-line-of-sight (NLOS) scenarios.
Consequently, a timing error, i.e., the start of the receiver DFT window falling outside the ISI-free region, appears, which in turn affects the subsequent signal processing. To combat timing errors caused by the multi-path uncertainty, one method for improving the TS correctness is to employ a joint processing mode, such as joint TS and channel estimation, as done in<cit.>. The method of joint TS and channel estimation<cit.> improves the TS correctness by partially counteracting the interference of multi-path uncertainty. Nevertheless, this joint mode <cit.> results in a relatively high computational complexity. Against the impairments caused by multi-path fading, noise, etc., an alternative method for improving the TS correctness is to deploy neural networks (NNs). In this context, several machine learning-based studies have been conducted on finding high-performance TS methods for OFDM systems<cit.>. In <cit.>, a one-dimensional convolutional neural network (1-D CNN)-based TS method is investigated for OFDM systems, which improves the TS correctness relative to the conventional TS method. Yet, this method ignores the impact of multi-path uncertainty. In <cit.>, the fine synchronization problem is investigated by assuming that coarse TS and channel equalization have already been achieved. Accordingly, <cit.> omits the consideration of multi-path interference, i.e., the multi-path uncertainty is neglected. While the work in <cit.> attempts to find ways to improve the TS correctness by designing training labels, the prerequisite of predicting the maximum multi-path delay limits its generalization performance. To summarize, since the high computational complexity is not addressed in<cit.> and the multi-path uncertainty is not considered in<cit.>, the practical application of machine learning-based TS is limited, which inspires us to investigate a lightweight machine learning-based TS method that is robust against the multi-path uncertainty. In this paper, we propose a lightweight timing-metric learning-based TS method for OFDM systems. To the best of our knowledge, the improvement of TS correctness against the multi-path uncertainty by learning the timing metric has not been investigated. The main contributions are listed below. * We propose the lightweight metric learning-based TS method. Different from <cit.>, the receptive field of the 1-D CNN layer is specially designed to be flexible according to the length of the cyclic prefix (CP). Also, compared with <cit.>, the computational complexity of the designed neural network is significantly reduced. * From the perspective of a de-noising task, we specially design the timing metric to be learned. Specifically, the impact of the uncertain multi-path delay on the timing metric is considered in designing the timing metric, and the impact of the uncertain multi-path gains is also considered in the training stage. Thus, the adaptability of NN-based TS against multi-path uncertainty is improved. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ System Model An OFDM system with N sub-carriers is considered. At the transmitter, the time-domain OFDM symbol {s( n )}^N-1_n=0 is obtained by using the inverse DFT, i.e., s( n ) = ∑_k = 0^N - 1S( k )e^ j2πkn/N, where {S(k)}^N-1_k=0 denotes the data/training symbol at the kth sub-carrier in the frequency domain. 𝔼{|s( n )|^2}=P_t, with P_t being the transmitted power. After appending the N_g-length cyclic prefix (CP), the transmitted signal consecutively passes through a multi-path fading channel. 
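To make the transmitter model concrete, the following minimal sketch (ours, not the authors' code; it assumes numpy, QPSK training symbols, and the paper's parameter values N = 128 and N_g = N/4) generates one CP-extended OFDM symbol according to (<ref>).

import numpy as np

rng = np.random.default_rng(0)
N = 128        # number of sub-carriers
N_g = N // 4   # CP length (N_g = 32)
P_t = 1.0      # transmitted power

# Frequency-domain training symbols S(k) (unit-modulus QPSK here, an assumption).
S = np.exp(1j * 2 * np.pi * rng.integers(0, 4, N) / 4)

# Time-domain symbol s(n) = sum_k S(k) e^{j 2*pi*k*n/N}; numpy's ifft includes a
# 1/N factor, so we scale by N to match the paper's convention, then normalise
# the empirical power so that E{|s(n)|^2} = P_t.
s = N * np.fft.ifft(S)
s *= np.sqrt(P_t / np.mean(np.abs(s) ** 2))

# Append the N_g-length cyclic prefix before the multi-path fading channel.
s_cp = np.concatenate([s[-N_g:], s])
print(s_cp.shape)   # (N + N_g,) = (160,)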
With an N_w-length observed interval at the receiver, the received sample is expressed as y( n ) = e^j2πεn/N·∑_l = 1^L h_ls( n - τ _l- θ) + w( n), where ε and θ respectively denote the normalized carrier frequency offset (CFO) and the unknown timing offset to be estimated. In (<ref>), h_l and τ_l are the complex gain and normalized delay of the lth arriving path, respectively. Meanwhile, τ_l=l-1 and 0≤τ_L<N_g are considered <cit.>. In (<ref>), w(n) represents the complex additive white Gaussian noise with zero mean and variance σ_n^2. Then, the received N_w samples {y( n )}^N_w-1_n=0 are buffered to form an observed vector 𝐲∈ℂ^N_w×1. To observe at least one complete training sequence, N_w≥2N+N_g is required, and thus a discrete searching interval for the unknown timing offset is employed, with its length being N_s=N_w-N. In a classic synchronizer <cit.>, the timing metric is utilized to estimate the unknown θ. According to (<ref>), the timing metric, denoted as {M(d)}^N_s-1_d=0, is calculated as<cit.> M( d ) = |∑_k = 0^N - 1x^ *(k)y ( d + k)|^2/∑_k = 0^N - 1|y ( d + k)|^2, where {x(k)}^N-1_k=0 represents a local training sequence. Due to the impacts of multi-path fading, noise, etc., the metric in (<ref>) is easily impaired, which then causes TS errors. Therefore, we develop a learning method to improve the TS correctness with a lightweight network. §.§ Problem Formulation Against the multi-path uncertainty, we focus on improving the computational complexity and the adaptability of the deployed learning-based TS in OFDM systems. Therein, M(d) in (<ref>) can be regarded as the extracted initial feature, and the impairments (e.g., noise, multi-path interference) represented in M(d) can then be learned and remedied by NNs, as done in <cit.>. The de-noising problem can be mathematically formulated as min_Θ‖Γ - G_Θ( M,Θ)‖_2^2, where Θ is the set of network parameters to be optimized, and G_Θ(·) is a mapping function parameterized by Θ. In (<ref>), the vectors Γ=[Γ(0),Γ(1),⋯,Γ(N_s-1)]^T and 𝐌=[M(0),M(1),⋯,M(N_s-1)]^T denote the timing metric {Γ(d)}^N_s-1_d=0 to be learned and the initial feature {M(d)}^N_s-1_d=0 to be de-noised, respectively. Nevertheless, the trained G_Θ(·) may suffer from severe TS errors due to the multi-path uncertainty. This is because the multi-path interference represented in M is randomly unpredictable. Therefore, the features in M are uncertain and hard to recognize, degrading the correctness of learning-based TS in wireless propagation scenarios. To handle this issue, the timing metric to be learned is specially designed to improve the TS correctness against multi-path uncertainty, which will be presented in Section III-B. § THE PROPOSED METRIC LEARNING-BASED TS §.§ Lightweight NN Architecture The proposed timing synchronizer is presented in Fig. <ref> and summarized in TABLE I; it consists of a classic correlator along with an NN-based processing module. In the NN block, a single-layer 1-D CNN and a single-layer fully connected NN are considered. For the 1-D CNN layer, the rectified linear unit (ReLU) is employed as the activation function. For the fully connected part, the tanh and softmax functions are employed in the hidden and output layers, respectively. In the 1-D CNN block, the 1-D CNN deploys one convolution layer with 4 filters, and its receptive field is selected as (N_g+1). Specifically, the receptive field of each filter is specially designed according to the finite lengths of the channel impulse response (CIR) and the CP. 
This is because the significant TS features mainly appear at the arriving paths, and the CIR length is less than the CP length. The TS feature extracted by using (<ref>) can be simplified as M( d ) ≈ (P_t/σ_n^2)/(1 + P_t/σ_n^2) ·∑_l = 1^L h_l δ( d - τ_l - θ) . Since N_g is usually less than one quarter of the symbol length N (i.e., N_g<0.25N), the increase in computational complexity caused by a large receptive field can be alleviated. Thus, a receptive field of size (N_g+1) is suitable for capturing the significant TS features extracted by the classic correlator. Also, the number of filters is set by considering that one complex multiplication (CM) equals 4 floating-point operations (FLOPs), i.e., the filter number is set to 4. Thus, the total computational cost of the 1-D CNN processing is approximately equal to that of a CP-based correlation. In the fully connected NN block, the size of the hidden layer is selected according to the maximum searching length of candidate timing offsets, i.e., N_s. To further reduce the data dimension fed to the fully connected layer, an average pooling layer with a patch size equal to the filter number is considered, i.e., a patch of size 4. In summary, the designed 1-D CNN and fully connected NN are constructed according to the parameters N_g and N_s, and are thus flexible in different scenarios. Meanwhile, by using CMs as the measure of computational complexity, the computational complexity of the designed NN is 0.5N^2_s+N_sN_g, while the correlation process in (<ref>) requires N_sN. Since N_s and N_g are constrained by N_s=N_w-N=N+N_g and 0<N_g<0.25N, we have 0.5N^2_s+N_sN_g-NN_s<0. Therefore, the designed NN is relatively lightweight compared with the classic correlator<cit.>. §.§ Timing Metric for TS Learning In the ISI-free region of each OFDM symbol, every sampling point can be regarded as a correct TS point<cit.>. Consequently, the timing metric to be learned can be expressed as Γ( d ) = ∑_θ̂= θ+τ̂_L+1^θ+N_gδ( d - θ̂), where θ̂ is the timing offset to be learned, and τ̂_L denotes the normalized maximum multi-path delay used for offline training. Usually, τ̂_L is assumed to be fixed during the training stage. However, due to the multi-path uncertainty, the real τ_L is unpredictable. For example, the root-mean-square multi-path delay changes with time and propagation environment<cit.>, making τ_L uncertain. Thus, it is highly possible that τ_L≠τ̂_L, resulting in incorrect labeling. When θ is fixed, the incorrectly labeled portion of the timing metric can be given by γ( d ) = ∑_θ̂= θ+τ̂_L+1^θ+N_gδ( d - θ̂)⊕∑_θ̂= θ+τ _L+1^θ+N_gδ( d - θ̂) = ∑_θ̂= θ+τ _L+1^θ+τ̂_Lδ( d - θ̂). When τ̂_L=τ_L, the case γ( d )=0 is achieved, which corresponds to ideal labeling. Due to the multi-path uncertainty, this case can hardly be achieved. Hence, we relax this demand by jointly considering the following motivations: * Although the cases of τ̂_L≥τ_L make γ( d )≠0, the set {θ̂}^θ+τ̂_L_θ+τ_L+1 still belongs to the ISI-free region. * Since τ_L is difficult to predict, τ̂_L is no exception. Therefore, other prior information needs to be exploited to determine the value of τ̂_L. * As NNs can compensate for deficiencies by learning from a sufficiently large data set, τ̂_L in (<ref>) can be expanded according to a set of random variables. Given an N_t-sample data set, we make {τ̂_L,i}^N_t_i=1 in (<ref>) satisfy τ̂_L,i∼^i.i.d U[N_g/2, N_g-1]. In this way, the timing metrics to be learned are expanded to increase the adaptability of the NN against multi-path uncertainty. 
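To make the construction of a training pair concrete, the sketch below (our own illustration, not the authors' implementation; the channel model, SNR, and variable names are assumptions) simulates one received frame, computes the classic correlation metric M(d) of (<ref>) as the input feature, and builds the label Γ(d) of (<ref>) with τ̂_L drawn uniformly from [N_g/2, N_g-1], as described above.

import numpy as np

rng = np.random.default_rng(0)
N, N_g = 128, 32
N_w = 2 * N + N_g              # length of the observed interval
N_s = N_w - N                  # length of the timing-offset search interval

# Local training sequence x(k) (QPSK here) and its CP-extended transmit block.
x = np.exp(1j * 2 * np.pi * rng.integers(0, 4, N) / 4)
tx = np.concatenate([x[-N_g:], x])

# Unknown timing offset theta and an L-path exponentially decaying channel (tau_l = l here).
theta = int(rng.integers(0, N))
L, eta, cfo, snr_db = 12, 0.1, 0.05, 10
h = np.sqrt(np.exp(-eta * np.arange(L)) / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

stream = np.zeros(N_w, dtype=complex)
stream[theta:theta + N + N_g] = tx
y = np.convolve(stream, h)[:N_w]                              # multi-path propagation
y *= np.exp(1j * 2 * np.pi * cfo * np.arange(N_w) / N)        # carrier frequency offset
y += 10 ** (-snr_db / 20) * (rng.standard_normal(N_w) + 1j * rng.standard_normal(N_w)) / np.sqrt(2)

# Classic correlation-based metric M(d): the initial feature to be de-noised.
M = np.array([np.abs(np.vdot(x, y[d:d + N])) ** 2 / np.sum(np.abs(y[d:d + N]) ** 2)
              for d in range(N_s)])

# Label Gamma(d): mark the ISI-free region, with tau_hat_L drawn uniformly from
# [N_g/2, N_g - 1] so that the labels reflect the uncertain maximum multi-path delay.
tau_hat_L = int(rng.integers(N_g // 2, N_g))
Gamma = np.zeros(N_s)
Gamma[theta + tau_hat_L + 1: theta + N_g + 1] = 1.0
Gamma /= Gamma.sum()           # optional: normalise if a softmax output layer is used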
According to (<ref>), the main deficiency in (<ref>) is caused by the dynamically changing τ_L, which results in erroneous labeling and aggravates TS errors. Since learning models can compensate for deficiencies by learning from the prior inputs and objectives, the features to be learned can be expanded to increase the adaptability of the NN against multi-path uncertainty. To this end, the prior τ̂_L∼^i.i.d U[N_g/2, N_g-1] is adopted to expand the features of the timing metrics. Thus, the adaptability of the trained model against the uncertain τ_L is enhanced. In Section III-C, the offline training and online deployment are described. §.§ Offline Training and Online Deployment §.§.§ Offline Training In this phase, N_t=50,000 is considered, which is split into validation and training sets with a ratio of 0.25. The data set is denoted as {𝐌_i,Γ_i}^N_t_i=1, in which 𝐌_i is obtained via (<ref>)–(<ref>) and Γ_i is obtained by using (<ref>). Therein, τ̂_L,i∼^i.i.d U[⌊ N_g/2⌋, N_g-1] is utilized to alleviate the effect of the uncertain multi-path delay. An exponentially decaying channel model <cit.> with decay exponent η is considered. Meanwhile, η_i∼^i.i.d U(0.01, 0.2) is employed to alleviate the effect of the uncertain multi-path gains. Besides, θ_i∼^i.i.d U[0,N-1]. For the designed NN in TABLE I, the optimizer employs the stochastic gradient descent (SGD) algorithm, and its initial learning rate is set to α=0.002 <cit.>. By denoting B and J as the batch size and the number of steps, respectively, the network optimization is defined as<cit.> Θ _q + 1←Θ_q - α∇1/B∑_i = (q-1)B+1^qB‖ G_Θ_q( M _i,Θ_q) - Γ_i‖_2^2 , where the subscript q denotes the qth iterative optimization step, and 1≤ q≤ J. §.§.§ Online Deployment By using (<ref>)–(<ref>), M(d) is obtained, forming 𝐌=[M(0),M(1),⋯,M(N_s-1)]^T. Then, with the optimized G_Θ(·), the model output, denoted as O∈ℝ^N_s×1, is given by O = G_Θ( M). Finally, by expressing O as [O(0),O(1),⋯,O(N_s-1)]^T, the estimated timing offset is θ̂ = arg max_0 ≤ d ≤N_s - 1{O(d)}. In Section IV-B and Section IV-C, the effectiveness and generalization of the proposed TS method against the multi-path uncertainty are presented. § SIMULATION RESULTS In the simulations, we consider the basic parameters N=128, N_g=⌊ N/4⌋=32<cit.>, N_w=2N+N_g=288, and N_s=N_w-N=160. The simulated channel models have not been utilized for offline training. For the sake of clarity, we list the abbreviations of the different timing synchronization methods in TABLE II. §.§ Computational Complexity The comparison of computational complexity among the different TS methods is illustrated in TABLE <ref>. Therein, the total number of channel paths is selected as 23, i.e., L=23, and the other parameters are adopted from Section IV-A. According to TABLE <ref>, “Prop” requires the smallest number of CMs among the given TS methods. Therefore, “Prop” has an advantage in realizing a lightweight network. §.§ Effectiveness Analysis To analyze the effectiveness, Fig. <ref> depicts the error probability of TS. Therein, an untrained exponential decay factor η=-ln(10^-10/10)/(L-1)<cit.> is utilized, and the maximum multi-path delay varies from 22T to 27T to simulate the multi-path uncertainty. In Fig. <ref>, for each given value of τ_L, the probability of TS error of “Prop” is smaller than those of “Ref<cit.>”, “Ref<cit.>”, and “Ref<cit.>”. Meanwhile, for all given SNRs, “Prop” achieves a lower probability of TS error than “DNN”. This is because a CNN captures data features more easily than DNN-based methods. 
It is noteworthy that, although τ_L increases from 22 to 27 due to the multi-path uncertainty, “Prop” exhibits a slighter generalization error than “Prop with fixed τ̂_L”, owing to the use of the designed timing-metric objective. Besides, “Prop” reaches a smaller TS error than “Prop without Γ”, which demonstrates the benefit of learning the timing metric. In summary, the performance improvement of “Prop” is effective against the multi-path uncertainty. §.§ Generalization Analysis Fig. <ref> compares the error probability of TS to analyze the generalization performance of “Prop” against different 5G tapped-delay-line (TDL) channel models<cit.>. Notably, these channel models have not been used for offline training. For each given channel model, “Prop” achieves the smallest probability of TS error among the given TS methods over the whole SNR region. Besides, for “Prop”, the fluctuations in the probability of TS error caused by the different untrained channel models are not obvious. Therefore, the proposed TS method (i.e., “Prop”) has a good generalization capability across different 5G TDL channel models. § CONCLUSION In this paper, we investigate a lightweight timing-metric learning-based TS method for OFDM systems, which alleviates the multi-path uncertainty by utilizing the designed timing-metric objective. Different from <cit.>, we utilize the proposed lightweight network along with the designed learning solution to learn the timing metric against the multi-path uncertainty, which improves the TS correctness and generalization performance with less computational complexity. Numerical simulation results exhibit the superiority of the proposed method in reducing the error probability of TS against multi-path uncertainty, whilst revealing its good generalization performance across different untrained 5G TDL channel models. § ACKNOWLEDGMENT This work is supported in part by the Sichuan Science and Technology Program (Grant No. 2023YFG0316, 2021JDRC0003), the Special Funds of Industry Development of Sichuan Province (Grant No. zyf-2018-056), and the Industry-University Research Innovation Fund of China University (Grant No. 2021ITA10016).
http://arxiv.org/abs/2307.01516v1
20230704065618
Noisy Games: A Study on the Effect of Noise on Game Specifications
[ "Constantinos Varsos", "Giorgos Flouris", "Marina Bitsaki" ]
cs.GT
[ "cs.GT" ]
Noisy Games: A Study on the Effect of Noise on Game Specifications Constantinos Varsos, Giorgos Flouris, and Marina Bitsaki ===================================================================================================================================================================================================== We consider misinformation games, i.e., multi-agent interactions where the players are misinformed with regards to the game that they play, essentially having an incorrect understanding of the game setting, without being aware of their misinformation. In this paper, we introduce and study a new family of misinformation games, called Noisy games, where misinformation is due to structured (white) noise that affects additively the payoff values of players. We analyse the general properties of Noisy games and derive theoretical formulas related to “behavioural consistency”, i.e., the probability that the players' behaviour will not be significantly affected by the noise. We show several properties of these formulas, and present an experimental evaluation that validates and visualises these results. § INTRODUCTION A common assumption in game theory <cit.> is that the abstract formulation of the game (number of players, strategies available to the players and payoffs depending on the chosen strategies) is publicly available to all players. Even for games with incomplete information <cit.>, the fact that knowledge is incomplete, and the exact form of incompleteness, is embedded in the game specification. However, in several scenarios, it could be the case that the players have wrong information regarding the game setup, while at the same time being unaware of the fact that their information is wrong, thus being misinformed. The agents, being unaware of their misinformation, may make choices that are unexpected and seem irrational from the external viewpoint, leading to unexpected results. These games are called misinformation games <cit.> (see also Figure <ref>). The main defining characteristic of misinformation games is that the players have no reason to believe that they have the wrong payoff information, and will play the game under the misconceived definition (payoffs) that they have. Nevertheless, the payoff that they will get is the one provisioned by the actual game. This makes the concept of misinformation games quite different from other types of games that have been defined in the literature, in particular games that incorporate uncertainty in their payoffs (e.g., Bayesian games <cit.>); in games with incomplete information, although uncertainty makes the players unsure as to their actual payoff for each different strategy, the players are well aware of that, and they accommodate their strategies accordingly, in order to make the best out of the uncertainty that they have. 
On the contrary, in misinformation games the players believe the information that was given, and do not consider mitigation measures “just in case” the information is wrong. In previous works <cit.>, various different causes of misinformation are identified, including deception and misleading reports, human errors, deliberate attempts by the game designer to channel players into different behaviours, erroneous sensor readings and random effects. In this paper, we focus on a special case of misinformation, attributed to noise and signal errors, a situation often occurring in distributed multiagent systems. This class of misinformation games will be called noisy games. Specifically, in distributed multiagent systems, agents[Note that we use the terms “agent” and “player” interchangeably throughout the paper.] are equipped with an internal logic that allows them to autonomously solve problems of a given nature. However, at deployment time, the precise specification of these problems is often unknown; instead, the details are communicated as needed at operation time, during the so-called “online phase” <cit.>. In such cases, unexpected communication errors, malfunctions in the communication module or noise may cause the agents to operate under a distorted problem specification, leading to unexpected behaviour. For example, consider the scenario where we have two autonomous self-interested agents, already deployed in an unfriendly environment. At some point in time, the human controller asks each of the agents to choose among two actions, also specifying the payoffs for each combination of choices. If the communication goes through as expected, then the behaviour of the agents is predictable by the well-known results of game theory. However, if one (or both) of the agents' communication module malfunctions, or if there is unexpected noise in the communication channel, the signal may arrive distorted. This could lead agents to receive an erroneous payoff matrix, essentially causing them to believe that they play a game different than the one communicated to them, with unpredictable results (Figure <ref>). Note that, if, at deployment time, the designer had foreseen the possibility for the agents to receive an erroneous game specification, then the agents would have been programmed to treat all signals as uncertain (i.e., true under a certain probability). In this case, the possibility of error is integrated in the agents' logic (even when no communication error occurs), and their behaviour can be modelled using the rich results on Bayesian games and games with incomplete information <cit.>. On the other hand, if such a scenario had not been foreseen at deployment time, then the agents will operate under the payoff matrices received, without considering the possibility that the payoff matrices are not the correct ones. This is quite different, as the agents' decisions will be totally misled by the erroneous setting, and will not consider mitigation measures “just in case” the specification that they received is wrong. The aim of this paper is to provide the theoretical machinery necessary to study scenarios of this kind. In particular, the main research question to be addressed is: given a game and a specific noise pattern affecting the players' perceived payoff matrices, compute the probability that players' behaviour (i.e., chosen strategies) will be as close as possible (in a manner to be formally defined later) to the behaviour that they would have in the absence of noise. 
In summary, the main contributions of this paper are the following: * The provision of motivation for the need to define misinformation in the context of noisy games, by positioning our work with respect to other similar efforts in the literature, in particular related to games with uncertainty, and games where the players have some kind of misconception related to the game's payoffs (Section <ref>). * The definition of a formal model for the description of misinformation in noisy games (Section <ref>). * The computation of the probability that the players' behaviour is not significantly affected by random noise, a feature that we call behavioural consistency (Section <ref>). * A thorough analysis of the properties of these probabilities (Section <ref>). * Experimentation to visualize and validate our results (Section <ref>). § RELATED WORK The works most related to the concept of noisy games are those of games with misspecified views (e.g., <cit.>, <cit.>, <cit.>). Despite the fact that Bayesian techniques are very popular in this stream of works, there is no consideration about the structure of misinformation that results in these views. As opposed to Bayesian games, where there is a rich literature that studies the influence of the structure of uncertainty in the knowledge of the players to their strategic behaviour. Thus, conceptually we are closer to the first group of works. On the other hand though, we study too the effect of distributions in the knowledge of the players as to their strategic behaviour, thus our results can be related to the latter stream of works. Bayesian games and games with incomplete information <cit.> have been introduced to handle uncertainty about players' payoffs. This uncertainty is represented by probabilities over the alternative payoff matrices. Also, there are cases where some payoffs are simply unknown. In all cases, the players are aware of the fact that the information they have is incomplete, and their strategies are adapted to cater for this incompleteness. This is the main difference with regards to misinformation games, where information is complete, but incorrect. That is, players are unaware of the fact that the information they have is incorrect, and thus, their strategic choices are entirely based on it. The works <cit.> are also relevant to ours. In these studies, the authors consider non-atomic routing games, and suggest that the players experience their own cost functions, which are potentially different from the actual ones (e.g., to model player-specific biases). This setting is similar to a misinformation game, except that, in our methodology, each player has a potentially different view of the entire game (including the payoffs of the other players), not just her own payoffs, and plays according to that view. In addition, the two approaches have a quite different motivation: in <cit.> authors assume that players modify their payoffs from the objective ones, by themselves, for personal reasons (bias or some kind of personal preference); in our case, the modifications are accidental, caused by communication errors. In <cit.>, the authors consider the impact of small fluctuations in the cost functions or in players’ perceptions of the cost structure in congestion and load balancing games, and study its effect on players' behaviour. A fluctuation is a departure from the classical viewpoint that treats payoffs as a number; under <cit.>, the payoff is a range of values “close” to the actual payoff. 
An extension of <cit.> considered normal-form games, aiming to define a new notion of equilibrium that maximizes the worst-case outcome over possible actions by other players <cit.> in the presence of fluctuations, whereas a further extension (<cit.>) studied the robustness of this equilibrium solution, utilizing approximations of the payoffs that introduce a fuzziness to the values of the payoff matrices. There are several differences between the concept of fluctuation and that of misinformation. First, the players are aware of the fluctuations and, thus, take them into account while deciding on their strategic choices. Second, fluctuations affect all players and all payoffs uniformly. Third, fluctuations have a limited effect, whereas the noise considered in our work may have an unlimited effect (subject to a certain probability function). Further, in <cit.> the authors study how resilient the strategic behaviour of players is when an unexpected communication loss occurs, and explore game settings in which communication failures can/cannot cause harm to the strategic behaviour of players. They introduce the notion of proxy payoffs in order to funnel communication failures and show that, in several settings, loss of information may cause arbitrary strategic behaviours. Our work has a similar contribution, as both works prove that in the presence of communication inefficiencies any strategic behaviour is possible. However, while the authors in <cit.> focus on how the agents choose policies so as to cope with communication failures, in this study we analyse the impact of disorder on the strategic behaviour of the players. Also, we model communication failure using probabilities, and provide formulas that quantify the probability of arbitrariness in strategic behaviours when information is degraded due to noise. Moreover, there is another stream of works considering random payoff matrices (e.g., <cit.>), where the focus is on the distribution of pure Nash equilibria. A tweak of this methodology is presented in <cit.>, where the authors study the distribution of players' average social utility. § PRELIMINARIES §.§ A brief refresher on probability theory We provide here some basic knowledge on probability theory that will be useful in the following sections. The interested reader is referred to <cit.> for further details. A random variable X is characterised by its probability density function (pdf), denoted by f_X, which represents the “intensity” of the probability at each given point. The pdf can be used to compute the probability that X falls within a given range, say [a,b], for any a ≤ b. Formally, f_X is such that: ℙ(a ≤ X ≤ b) = ∫^b_a f_X(x) dx We denote by F_X the cumulative distribution function (cdf) of a random variable X, which equals the probability that the value of X is at most x. Formally: F_X(x) = ∫^x_-∞ f_X(t) dt = ℙ(X ≤ x) In this paper, we focus on random variables X following the normal distribution, denoted by X ∼𝒩(μ, D^2) (for some mean μ∈ℝ and standard deviation[In probability theory, standard deviation is typically denoted by σ. To avoid confusion with the strategies of normal form games, which use the same symbol (see Subsection <ref>), we use D as the symbol for standard deviation in this paper.] D > 0). 
For the special case where μ = 0 and D = 1 (i.e., when X ∼𝒩(0,1)), we get the standard normal distribution, with the following pdf (ϕ) and cdf (Φ): ϕ(x) = (1/√(2π)) e^-x^2/2, Φ(x) = (1/√(2π)) ∫^x_-∞ e^-t^2/2 dt For the general case, where X ∼𝒩(μ, D^2), the pdf and cdf are: f_X(x) = (1/D) ϕ((x-μ)/D) = (1/(D√(2π))) e^-((x-μ)/(D√(2)))^2 F_X(x) = Φ((x-μ)/D) = (1/√(2π)) ∫^(x-μ)/D_-∞ e^-t^2/2 dt It has been shown that, if the X_i ∼𝒩(μ_i, D_i^2) are independent and c_0, c_i ∈ℝ, then: c_0 + ∑ c_i X_i ∼𝒩(c_0 + ∑ c_i μ_i, ∑ c_i^2 D_i^2) Given two events A, B, the symbol ℙ(A | B) denotes the conditional probability of A given B, which amounts to the probability that A is true under the condition that B is true. When combining events, the following are true: General Conjunction Rule: ℙ(A ∧ B) = ℙ(B)·ℙ(A | B) = ℙ(A)·ℙ(B | A) Restricted Conjunction Rule: ℙ(A ∧ B) = ℙ(A)·ℙ(B) (when A, B are independent) General Disjunction Rule: ℙ(A ∨ B) = ℙ(A) + ℙ(B) - ℙ(A ∧ B) Restricted Disjunction Rule: ℙ(A ∨ B) = ℙ(A) + ℙ(B) (when A, B are mutually exclusive) §.§ Normal-form games Normal-form games <cit.> are the most commonly studied class of games. A game in normal form is represented by a payoff matrix that defines the payoffs of all players for all possible combinations of pure strategies. Formally: A normal-form game G is a tuple G = ⟨ N, S, P⟩, where: * N ={1,2,…,n} is the set of players. * S = S_1×…× S_n, S_i is the set of pure strategies of player i∈ N. * P = (P_1; …; P_n), P_i ∈ℝ^| S_1|×…×| S_n| is the payoff matrix of player i ∈{1,2,…,n}. In this paper we focus on 2 × 2 bimatrix games, a popular class of games defined as follows: A 2× 2 bimatrix game G is a normal-form game G = ⟨ N, S, P⟩, such that: * N = {r, c} is the set of players * S = S_r × S_c, where S_r = S_c = {s_1,s_2} * P = (P_r; P_c), P_r,P_c ∈ℝ^2× 2 Let us now fix some player x∈{r,c}. A strategy of x is a pair σ_x = (σ_x,1, σ_x,2), where (σ_x,1, σ_x,2) form a discrete probability distribution over S_x (i.e., σ_x,1,σ_x,2∈ [0,1], σ_x,1 + σ_x,2 = 1). When σ_x,1, σ_x,2∈ (0,1) the strategy is called a mixed strategy; otherwise, it is called a pure strategy. The support of a strategy σ_x, denoted by supp(σ_x), is the set of pure strategies (from S_x) that are played with positive probability in σ_x (thus, supp(σ_x) ⊆ S_x). We denote by Σ_x the set of all possible strategies of player x. Clearly, for 2 × 2 bimatrix games, Σ_x = {(p,1-p) | 0 ≤ p ≤ 1 }. A strategy profile is a pair σ = (σ_r, σ_c), for σ_r ∈Σ_r, σ_c ∈Σ_c. We denote by Σ the set of all strategy profiles, i.e., Σ = Σ_r ×Σ_c. A strategy profile is called pure if it consists of pure strategies only, mixed if it consists of mixed strategies only, and hybrid if it consists of a pure and a mixed strategy. The payoff function of player x, under a given strategy profile σ = (σ_r, σ_c), u_x: Σ→ℝ, is defined as: u_x(σ_r,σ_c) = σ_r^T P_x σ_c where σ_r^T represents the transpose of vector σ_r. For x ∈{r,c}, we denote by x̅ the other player, i.e., r̅ = c, c̅ = r. Given a strategy σ_x of x ∈{r,c}, the best response of x̅ is the strategy σ_x̅ that maximizes her payoff, given σ_x. A Nash equilibrium is a strategy profile for which any unilateral change in the strategy of any given player would not produce a better payoff for that player. In other words, a Nash equilibrium is a strategy profile where each player plays her best response, given the other player's strategic choice. 
For bimatrix games, this notion can be formalised as follows: A strategy profile σ^* = (σ^*_r, σ^*_c) is a Nash equilibrium if and only if, for any σ̂_r ∈Σ_r, σ̂_c ∈Σ_c, σ^*_r^T P_r σ^*_c ≥σ̂_r^T P_r σ^*_c and σ^*_r^T P_c σ^*_c ≥σ^*_r^T P_c σ̂_c It has been shown that all games possess at least one Nash equilibrium <cit.>. If σ^* = (σ^*_r, σ^*_c) is a Nash equilibrium, then σ^*_r, σ^*_c are called Nash equilibrium strategies. We denote by NE(G) the set of all Nash equilibria for a game G, and by NE_x(G) the Nash equilibrium strategies of player x in G. A 2 × 2 bimatrix game is called degenerate if and only if there is a pure strategy that has two pure best responses. In the seminal work of <cit.>, the authors defined a metric, the Price of Anarchy (PoA), that measures the efficiency of a system with non-cooperative players. Let SW(σ) be the social welfare function defined as the sum of players' payoffs for the strategy profile σ, and opt the socially optimal strategy profile, i.e., opt = arg max_σ SW(σ). Then, PoA is defined as follows: Given a normal-form game G, the Price of Anarchy (PoA) is defined as: PoA = SW(opt)/min_σ∈ NE(G) SW(σ) §.§ Notational conventions and shorthands To avoid confusion caused by the use of multiple indices in subsequent sections, we will use the notation A[i,j] to refer to the element in the i^th row and j^th column of a matrix A, i.e., if A=(a_ij), then A[i,j] = a_ij. We will use boldface to indicate tables whose elements are all equal to a certain value. For example, b_n× m represents the n× m table B, such that B[i,j] = b for all i,j. The n× m subscript will be omitted when obvious from the context. For three tables A, M, D of the same dimensions, we write A ∼⟨ M, D⟩ to indicate that A[i,j] ∼𝒩(M[i,j], D[i,j]^2) for all i,j. Analogously, we extend the notation used in limits (x → a) for tables. In particular, for two tables X,A of the same dimension, we write X → A to denote X[i,j] → A[i,j] ∀ i,j. We extend the notation to include infinity, e.g., X → +∞ is equivalent to X[i,j] → +∞ for all i,j. We define operators on payoff matrices as follows. Consider a 2× 2 bimatrix game G = ⟨ N,S,P⟩, where P=(P_r;P_c). Then: * For 2 × 2 tables M_r, M_c, D_r, D_c, the expression G ∼⟨(M_r;M_c),(D_r;D_c)⟩ indicates that P_r ∼⟨ M_r, D_r⟩, P_c ∼⟨ M_c, D_c⟩ * For a 2× 2 bimatrix A=(A_r;A_c) and λ∈ℝ, the result of the operation λ G + A is the 2× 2 bimatrix game G' = ⟨ N',S',P'⟩, where N'=N, S'=S, P' = λ P + A 
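To fix ideas, the following sketch (ours, for illustration only; it assumes Python/numpy and uses the Battle-of-the-Sexes payoffs that also serve as the running example later in the paper) encodes a 2×2 bimatrix game, evaluates the payoff function u_x(σ_r, σ_c) = σ_r^T P_x σ_c, enumerates the pure Nash equilibria by a best-response check, and computes the Price of Anarchy restricted, for simplicity, to pure profiles and pure equilibria.

import numpy as np
from itertools import product

# Battle-of-the-Sexes payoffs (the running example used later in the paper).
P_r = np.array([[3.0, 0.0], [0.0, 2.0]])
P_c = np.array([[2.0, 0.0], [0.0, 3.0]])

def payoff(P, sigma_r, sigma_c):
    """u_x(sigma_r, sigma_c) = sigma_r^T P_x sigma_c."""
    return np.array(sigma_r) @ P @ np.array(sigma_c)

# Pure strategies written as probability vectors.
pure = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Pure Nash equilibria: no player gains by a unilateral deviation to the other pure strategy.
pure_ne = []
for i, j in product(range(2), range(2)):
    row_ok = P_r[i, j] >= P_r[1 - i, j]
    col_ok = P_c[i, j] >= P_c[i, 1 - j]
    if row_ok and col_ok:
        pure_ne.append((i, j))

# Social welfare and Price of Anarchy (restricted here to pure profiles and pure equilibria).
sw = lambda i, j: P_r[i, j] + P_c[i, j]
sw_opt = max(sw(i, j) for i, j in product(range(2), range(2)))
poa = sw_opt / min(sw(i, j) for i, j in pure_ne)

print(pure_ne)                                # [(0, 0), (1, 1)] for the BoS payoffs above
print(payoff(P_r, pure[0], pure[0]), poa)     # 3.0, PoA = 1.0 over pure equilibria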
A misinformation game G^0, G^1, …, G^n is called canonical if and only if: * In G^0, all players have an equal number of pure strategies * For any x ∈{1,…,n}, G^0, G^x differ only in their payoff matrices Noisy games are a special class of misinformation games, where misinformation is due to a random distortion in the original payoff matrix. Formally: A noisy game is a canonical misinformation game mG =, where G^i = G^0+Δ^i for some matrix Δ^i whose elements follow a certain probability distribution. Note that the restriction of a noisy game being canonical implies that noise affects only the payoff matrix. In a more general scenario, noise could also affect the number of players and/or the strategies that a player understands (knows) regarding a game. However, as shown in <cit.>, we can restrict ourselves to canonical games for simplicity. In this paper, we concentrate on noisy games whose actual game is a 2 × 2 bimatrix game, and where each element of Δ^i follows the normal distribution. We call such games normal noisy games. Therefore: A normal noisy game is a tuple mG =, where: * G^0, G^r, G^c are 2 × 2 bimatrix games * For x∈{r,c}, G^x = G^0 + Δ^x, where Δ^x is a bimatrix whose elements follow the normal distribution (possibly for a different mean and standard deviation) For M = (M_r;M_c), = (_r;_c), we write mG ∼ G^0 + M, to indicate a normal noisy game mG = G^0, G^r, G^c, where G^x = G^0 + Δ^x, Δ^x ∼M_x,_x. Formula <ref> implies that, when mG ∼ G^0 + M,, then G^x ∼G^0 + M_x, _x. §.§ Strategies, strategy profiles and equilibria in misinformation games We define strategies and equilibrium concepts for the case of normal noisy games of two players. Our formulation can be extended to apply to arbitrary misinformation games. Consider a canonical normal noisy game mG. A misinformed strategy σ_x for player x in mG is a strategy in G^x. A misinformed strategy profile results by the agglomeration of misinformed strategies for the individual game, and is defined as a pair σ = (σ_r,σ_c), where σ_x is a misinformed strategy of x ∈{r,c}. Pure/mixed misinformed strategies, and pure/mixed/hybrid strategy profiles are defined analogously to their standard counterparts (see Subsection <ref>). Since mG is canonical, a misinformed strategy (and misinformed strategy profile) is also a strategy (strategy profile) in G^0. Thus, we simply use Σ_x to denote the set of misinformed strategies of player x in mG, and Σ to denote the set of misinformed strategy profiles of mG. It is important to note that, although the decisions of a player are made based on his own payoff matrix (the one in G^x), payoffs are computed on the basis of the actual payoff matrix (the one in G^0). This is reflected in the definition of payoffs and equilibria below. Let P^0=(P_r^0;P_c^0), P^r=(P_r^r;P_c^r), P^c=(P_r^c;P_c^c) be the payoff matrices of G^0, G^r, G^c respectively. Then: * The actual payoff function of player x, under a given strategy profile σ = (σ_r, σ_c), u_x: Σ→ℝ, is defined as: u_x(σ_r,σ_c) = σ_r^T P_x^0 σ_c * The misinformed payoff function of player x, under the viewpoint of player y and the strategy profile σ = (σ_r, σ_c), u^y_x: Σ→ℝ, is defined as: u^y_x(σ_r,σ_c) = σ_r^T P_x^y σ_c Note that the actual payoff function represents the payoff that player x will receive as a response to her strategic choices. On the contrary, the misinformed payoff function represents the payoff that player x believes that she will receive, under the (erroneous) view of the game that player y has. 
A notion of equilibrium, defined in <cit.>, considers misinformed equilibrium simply as the agglomeration of the Nash equilibrium strategies of each player in her own game: A misinformed strategy, σ^*_x, of player x, is a natural misinformed equilibrium strategy, if and only if it is a Nash equilibrium strategy for x in G^x. A misinformed strategy profile σ^* is called a natural misinformed equilibrium (nme) if it consists of natural misinformed equilibrium strategies. The natural misinformed equilibrium will occur in one-off settings, i.e., when each player just picks a (seemingly optimal) strategy based on his own viewpoint. It is easy to see that at least one natural misinformed equilibrium exists in any misinformation game. Inspired by <cit.>, the authors of <cit.> defined a metric to measure the effect of misinformation compared to the social optimum, based on a social welfare function SW. This metric is called Price of Misinformation (PoM), and is defined as follows: Given a misinformation game mG, the Price of Misinformation (PoM) is defined as: PoM = SW(opt)/min_σ∈ nme SW(σ) Apparently, if PoM = 1, the players adopt optimal behaviour, due to misinformation. Moreover, interesting results can be derived by comparing the PoA of G^0 with the PoM of mG: if PoM < PoA, then misinformation has a beneficial effect on social welfare, as the players are inclined (due to their misinformation) to choose socially better strategies; on the other hand, if PoM > PoA, then misinformation leads to a worse outcome, from the perspective of social welfare. §.§ Behavioural Consistency and closeness The misinformed equilibria of a normal noisy game may be different than the Nash equilibria of the actual game. We define a metric to quantify the distance among these equilibria and their respective strategies, essentially measuring the effect of noise on the behaviour of the players. For the definition, we use the infinite norm distance for vectors; formally, for a vector v⃗ = (v_1,v_2), the infinite norm distance v⃗_∞ = max{v_1,v_2}. The notion of ε-closeness is now defined as follows: Let σ = (σ_1, σ_2), σ' = (σ_1', σ_2') be two strategies and ε≥ 0. Then we say that σ, σ' are ε-close if and only if supp(σ) = supp(σ') and σ-σ' _∞≤ε. For strategy σ, the set of strategies that are ε-close to it, is denoted by σ. Intuitively, the definition states that two strategies are ε-close if and only if they have identical supports and the allocation imposed by the players' strategies does not differ by more than ε in any dimension. The fact that ε-closeness requires identical supports is based on the idea that adding (or removing) a pure strategy to (from) the support of a strategy is considered a major change in the player's behaviour. Note also that the above definition applies on strategies in general, and, thus, allows us to apply it also to check ε-closeness among strategies and/or misinformed strategies, as long as in each strategy profile each player has the same number of pure strategies. We extend Definition <ref> to (misinformed and non-misinformed) strategy profiles (and equilibria), in the obvious manner: σ = (σ_r,σ_c) is ε-close to σ' = (σ_r',σ_c') if and only if σ_r is ε-close to σ_r' and σ_c is ε-close to σ_c'. We denote by σ the strategy profiles that are ε-close to σ. For a set of strategy profiles Σ^*, we set Σ^* = ⋃_σ∈Σ^*σ, i.e., the strategy profiles that are ε-close to at least one of the strategy profiles in Σ^*. 
The definition of ε-closeness gives formal substance to the idea of the behaviour of the players (expressed as an equilibrium) being “similar”: two equilibria that are ε-close are “similar” (and vice-versa). This notion allows us to formally define the behavioural consistency of players in the presence of noise, which amounts to checking whether the equilibria of the noisy game are similar (i.e., ε-close) to the “expected” ones under the actual game. Formally: Consider a normal noisy game mG and some tolerance ε≥ 0. Then, * mG is ε-misinformed iff for every natural misinformed equilibrium σ^* of mG, there is a Nash equilibrium σ^0 of G^0, such that σ^* ∈σ^0. * mG is inverse-ε-misinformed iff for every Nash equilibrium σ^0 of G^0, there is a natural misinformed equilibrium σ^* of mG, such that σ^* ∈σ^0. §.§ Running example The following example will be used as a running example for the rest of the paper to illustrate our results. [Running example] We consider two autonomous robotic agents r, c, deployed in a remote environment. At some point in time, the human controller asks each of the agents to choose among two actions s_1, s_2, also specifying the payoffs for each combination of choices, as shown in matrix P^0 below. P^0 = ( [ (3,2) (0,0); (0,0) (2,3); ]) The above payoff matrix corresponds to the well-known Battle of the Sexes (BoS) game[See <https://en.wikipedia.org/wiki/Battle_of_the_sexes_(game_theory)>.], which has three Nash equilibria, namely σ^0_1 = ((1,0),(1,0)), σ^0_2 = ((0,1),(0,1)), σ^0_3 = ((0.6,0.4),(0.4,0.6)). However, one of the components of the central communication module has received damage, unknowingly to the agents or the human controller, causing it to introduce a random noise (δ∼0,1) to each of the values in P^0 during transmission. The above setting can be modelled as a normal noisy game mG = G^0, G^r, G^c∼ G^0 + M^x_y,^x_y, where the payoff matrix of G^0 is P^0, and M^x_y = 0_2× 2, ^x_y = 1_2× 2 for all x,y∈{r,c}. Our objective here is to compute the probability that the robotic agents will exhibit behavioural consistency (Definition <ref>), despite the noise caused by the malfunction. This question will be addressed in Section <ref> below. § PROBABILITIES FOR BEHAVIOURAL CONSISTENCY In this section, we will compute the probabilities for a normal noisy game being (inverse-)ε-misinformed. For better readability, we split our analysis in 3 subsections. In Subsection <ref>, we recast some known results from game theory in a way that is more suitable for our analysis, whereas in Subsection <ref>, we develop some results that determine necessary and sufficient conditions for a misinformation game to be (inverse-)ε-misinformed. These results are then employed in Subsection <ref> to compute the required probabilities. The respective results are summarized in Table <ref> (for Subsection <ref>), Table <ref> (for Subsection <ref>) and Tables <ref>, <ref> and <ref> (for Subsection <ref>). §.§ Determining Equilibrium Strategies For a 2 × 2 bimatrix game G, we denote by [G]xi the utility gain of strategy s_1 (compared to s_2) for player x ∈{r,c} when her opponent plays s_i, in game G. The reference to G will be omitted when obvious from the context. Note that xi is determined by the elements of the payoff matrix of G (say P= (P_r;P_c)) as follows: * For x = r, ri = P_r[1,i] - P_r[2,i] * For x = c, ci = P_c[i,1] - P_c[i,2] Intuitively, xi > 0 would mean that player x would play s_1, if her opponent chooses to play s_i, i.e., that s_1 is the best response (for x) to s_i. 
Similarly, xi < 0 would mean that player x would play s_2, if her opponent chooses to play s_i, i.e., that s_2 is the best response (for x) to s_i. Finally, when xi = 0, then player x is indifferent as to whether to play s_1 or s_2, i.e., it has two pure best responses for her opponent's pure strategy s_i, indicating that the game is degenerate. [Running example] From Example <ref> we have that, for G^0, the following hold: r1 = 3, r2 = -2, c1 = 2, and c2 = -3. Some well-known results from game theory for bimatrix games can be recast using the concept of xi. For example, the following proposition gives an equivalent formulation of the degeneracy criterion for 2 × 2 bimatrix games[Proofs for all results appear in the Appendix.]: A 2× 2 bimatrix game G is degenerate if and only if xi = 0 for some x ∈{r,c}, i ∈{1,2}. When a non-degenerate 2 × 2 bimatrix game has a mixed Nash equilibrium, then its value is determined by xi: Consider a non-degenerate 2× 2 bimatrix game G = N,S,P, for P = (P_r;P_c). If (p,1-p) ∈ NE_x(G) for some 0 < p < 1, x∈{r,c}, then: p = x̅2/x̅2 - x̅1 Now consider a non-degenerate 2 × 2 bimatrix game G and some player x∈{r,c}. From classical results in game theory, we know that there are 4 possible cases for NE_x(G), namely NE_x(G) = {(1,0)}, NE_x(G) = {(0,1)}, NE_x(G) = {(p,1-p)} for some 0 < p < 1 and NE_x(G) = {(1,0), (0,1), (p,1-p)} for some 0 < p < 1. If the game is degenerate, then there is one additional possibility, namely that NE_x(G) = {(p,1-p) | 0 ≤ p ≤ 1} = Σ_x. For non-degenerate games, the value of NE_x(G) can be determined using the following: * NE_x(G) = {(1,0)} if and only if s_1 is dominant for x, or s_i is dominant for x̅ and s_1 is the best response for x on s_i. * NE_x(G) = {(0,1)} if and only if s_2 is dominant for x, or s_i is dominant for x̅ and s_2 is the best response for x on s_i. * NE_x(G) = {(p,1-p)} for some 0 < p < 1 if and only if no strategy is dominant for either player and no pure Nash equilibrium exists. * NE_x(G) = {(1,0), (0,1), (p,1-p)} for some 0 < p < 1 if and only if no strategy is dominant for either player and two pure Nash equilibria exist. The above conditions can also be expressed in terms of xi, as shown in Table <ref>. In the table, the various (mutually exclusive) cases are visualised for player r and for a non-degenerate game. The small figure in the rightmost column shows the depicted condition in terms of the relative order among the elements of P_r (blue lines) or P_c (yellow lines), which is determined by the sign (positive or negative) of xi. The first column provides a reference to the formulation of Proposition <ref>, where the above are formally stated and proved. 
Before showing Proposition <ref>, for brevity, we introduce the following predicates to refer to the different cases with regards to the value of NE_x(G): * Only-pure: Gxi, which is true if and only if the only equilibrium strategy for player x in game G is to play s_i, i.e.: Gx1 iff NE_x(G) = {(1,0)} and Gx2 iff NE_x(G) = {(0,1)} * Only-mixed: Gxp, which is true if and only if the only equilibrium strategy for player x in game G is (p, 1-p) (where 0 < p < 1), i.e.: Gxp iff NE_x(G) = {(p,1-p)} * Pure-and-mixed: Gxp, which is true if and only if player x has 3 equilibrium strategies in game G, two pure and one mixed, and the mixed one is (p,1-p) (where 0 < p < 1), i.e.: Gxp iff NE_x(G) = {(1,0),(0,1),(p,1-p)} * Ranged-only-mixed: Gxω_1ω_2, which is true if and only if Gxp is true for some ω_1 < p < ω_2, i.e.: Gxω_1ω_2 iff NE_x(G) = {(p,1-p)} for some p such that ω_1 < p < ω_2 * Ranged-pure-and-mixed: Gxω_1ω_2, which is true if and only if Gxp is true for some ω_1 < p < ω_2, i.e.: Gxω_1ω_2 iff NE_x(G) = {(1,0),(0,1),(p,1-p)} for some p such that ω_1 < p < ω_2 * Infinite-Nash: Gx, which is true if and only if player x has an infinite number of equilibrium strategies, namely the entire Σ_x (note that this is possible only for degenerate games), i.e.: Gx iff NE_x(G) = Σ_x When the game G is obvious from the context, we will omit the superscript G from the above. Now we can formally state Proposition <ref>, which formalises the intuition of Table <ref>: For any non-degenerate 2 × 2 bimatrix game the following hold: * x1 if and only if either one of the following is true: * (x1 > 0) ⋀ (x2 > 0) * (x1 > 0) ⋀ (x2 < 0) ⋀ (x̅1 > 0) ⋀ (x̅2 > 0) * (x1 < 0) ⋀ (x2 > 0) ⋀ (x̅1 < 0) ⋀ (x̅2 < 0) * x2 if and only if either one of the following is true: * (x1 < 0) ⋀ (x2 < 0) * (x1 < 0) ⋀ (x2 > 0) ⋀ (x̅1 > 0) ⋀ (x̅2 > 0) * (x1 > 0) ⋀ (x2 < 0) ⋀ (x̅1 < 0) ⋀ (x̅2 < 0) * xp if and only if p = x̅2/x̅2 - x̅1 and either one of the following is true: * (x1 > 0) ⋀ (x2 < 0) ⋀ (x̅1 < 0) ⋀ (x̅2 > 0) * (x1 < 0) ⋀ (x2 > 0) ⋀ (x̅1 > 0) ⋀ (x̅2 < 0) * xp if and only if p = x̅2/x̅2 - x̅1 and either one of the following is true: * (x1 > 0) ⋀ (x2 < 0) ⋀ (x̅1 > 0) ⋀ (x̅2 < 0) * (x1 < 0) ⋀ (x2 > 0) ⋀x̅1 < 0) ⋀ (x̅2 > 0) An analogous set of conditions determines whether the “ranged” versions of the above predicates are true: Given a non-degenerate 2× 2 bimatrix game G, the following hold: * xω_1ω_2 if and only if ω_1 < x̅2/x̅2 - x̅1 < ω_2 and either one of the following is true: * (x1 > 0) ⋀ (x2 < 0) ⋀ (x̅1 < 0) ⋀ (x̅2 > 0) * (x1 < 0) ⋀ (x2 > 0) ⋀ (x̅1 > 0) ⋀ (x̅2 < 0) * xω_1ω_2 if and only if ω_1 < x̅2/x̅2 - x̅1 < ω_2 and either one of the following is true: * (x1 > 0) ⋀ (x2 < 0) ⋀ (x̅1 > 0) ⋀ (x̅2 < 0) * (x1 < 0) ⋀ (x2 > 0) ⋀ (x̅1 < 0) ⋀ (x̅2 > 0) [Running example] Continuing Example <ref>, we note that the signs of the various x i (as computed in Example <ref>) indicate that case (4a) of Table <ref> holds. Thus, the actual game G^0 has both pure and mixed Nash equilibria. Given that c 2/ c2 - c1 = 0.6, it follows that r0.6 is true. Analogously, since r 2/ r2 - r1 = 0.4, it follows that c0.4 is true. §.§ Misinformation Games In this subsection, we provide necessary and sufficient conditions for a misinformation game to be (inverse-)ε-misinformed. These are given in Propositions <ref>, <ref>, and use the notation previously introduced. Note that the propositions apply for all canonical misinformation games, not just noisy games. The results of this subsection are summarized in Table <ref>. 
Consider a canonical misinformation game mG = G^0,G^r,G^c, where G^0 is a 2 × 2 bimatrix game and G^r, G^c are non-degenerate. Then, mG is ε-misinformed if and only if, for all x ∈{r,c}, one of the following is true: * G^0xi and G^xxi for some i ∈{1,2} * G^0xp^0 for some 0 < p^0 < 1 and G^xxω_1ω_2, where ω_1 = max{0, p^0 - ε}, ω_2 = min{1, p^0 + ε} * G^0xp^0 for some 0 < p^0 < 1 and G^xx1⋁G^xx2⋁G^xxω_1ω_2⋁G^xxω_1ω_2, where ω_1 = max{0, p^0 - ε}, ω_2 = min{1, p^0 + ε} * G^0x Consider a canonical misinformation game mG = G^0,G^r,G^c, where G^0 is a 2 × 2 bimatrix game and G^r, G^c are non-degenerate. Then, mG is inverse-ε-misinformed if and only if, for all x ∈{r,c}, one of the following is true: * G^0xi and G^xxi⋁G^xx01 for some i ∈{1,2} * G^0xp^0 for some 0 < p^0 < 1 and G^xxω_1ω_2⋁G^xxω_1ω_2, where ω_1 = max{0, p^0 - ε}, ω_2 = min{1, p^0 + ε} * G^0xp^0 for some 0 < p^0 < 1 and G^xxω_1ω_2, where ω_1 = max{0, p^0 - ε}, ω_2 = min{1, p^0 + ε} * G^0x and ε > 0.5 and G^xxω_1'ω_2', where ω_1' = max{0, 1 - ε}, ω_2' = min{1, ε} §.§ Probabilities We will now exploit the results of the previous subsections, in order to compute the probabilities associated to various events, eventually leading up to the computation that a given normal noisy game mG is (inverse-)ε-misinformed. The results are summarized in Table <ref>, whereas intermediate results necessary to compute the above probabilities appear in Tables <ref> and <ref>. For a normal noisy game mG ∼ G^0 + M,, we define the family of random variables yxi, such that, for any x,y ∈{r,c}, i ∈{1,2}: yxi = [G^y]xi Applying formula (<ref>) from Subsection <ref>, we observe that yxi∼μ_yxi, _yxi for μ_yxi, _yxi as shown in Table <ref>. The cdf and pdf of yxi (as resulting from formula (<ref>) in Subsection <ref>), as well as the probabilities for yxi taking certain values are also shown in the same table. [Running example] Continuing our running example (Example <ref>), we can now compute the distribution followed by the random variables yxi for x,y ∈{r,c}, i ∈{1,2}, using Table <ref>. Indeed, by Table <ref>, μ_ yxi = xi and _ yxi = √(2) for all x,y ∈{r,c}, i ∈{1,2}. As a more specific example, let us consider r r 1, which corresponds to the random variable representing the utility gain of strategy s_1 (as opposed to s_2), for player r under the viewpoint of player r. By Table <ref>, and the values for M, (Example <ref>), we conclude that μ_ rr1 = 3, _ rr1 = √(2). Analogously, we can compute the rest. From this, and formulas <ref> in subsection <ref>, we get that the pdf and cdf of rr1 are: f_ rr1(x) = 1 2 √(π) e^- ( x - 32)^2 and F_ rr1(x) = 1√(2π)∫^x-3√(2)_-∞ e^-t^2/2 dt Propositions <ref> and <ref> can now be employed to determine the probability that NE_x(G^x) takes a certain value, based on the probabilities that yxi take certain values. More precisely, Lemma <ref> is the counterpart of Proposition <ref>: In any normal noisy game mG = G^0, G^r, G^c, the probability that G^x is degenerate (for x ∈{r,c}) is 0. To formulate the counterpart of Proposition <ref>, the following lemma will prove helpful: Consider two independent random variables X ∼μ_X, _X, Y ∼μ_Y, _Y, with pdfs f_X,f_Y respectively, and some Ω_1, Ω_2 ∈ℝ∪{-∞} such that -∞≤Ω_1 < Ω_2 ≤ 0. 
Then: Ω_1 ≤X/Y≤Ω_2, X < 0, Y > 0 = ∫_0^+∞(∫_Ω_1 y^Ω_2 y f_X(x) dx ) f_Y(y)/y dy Ω_1 ≤X/Y≤Ω_2, X > 0, Y < 0 = ∫_-∞^0(∫_Ω_1 y^Ω_2 y f_X(x) dx ) f_Y(y)/y dy The next proposition determines the probability that NE_x(G^x) will have each of its possible values (see also Table <ref>): Consider a normal noisy game mG ∼ G^0 + M,, and some x ∈{r,c}. Then, the probabilities G^xx1, G^xx2, G^xxω_1ω_2, G^xxω_1ω_2 and G^xx are as shown in Table <ref>. [Running example] Continuing Example <ref>, we can now compute the probabilities that each of the robotic agents will believe that they play a game with pure, mixed or pure and mixed strategies. We start with the computation of the relevant F_ yxi(0) quantities for x,y ∈{r,c}, i ∈{1,2}, which are based on the respective pdf/cdf that were computed in Example <ref>: * For the r agent: F_rr1(0) = 0.017, F_rr2(0) = 0.921, F_rc1(0) = 0.078, F_rc2(0) = 0.983 * For the c agent: F_cc1(0) = 0.078, F_cc2(0) = 0.983, F_cr1(0) = 0.921, F_cr2(0) = 0.016 Regarding the two double integrals in the third and fourth formulas in Table <ref> our computations yield the values 0.001 and 0.229 for r respectively. Similarly, for c we take the values 0.001 and 0.189. Using the above results it is now easy to compute the following quantities, using the formulas of Table <ref>: * For the r agent: G^rr1 = 0.091, G^rr2 = 0.085, G^rr 0.50.7 = 0.001, G^rr 0.50.7 = 0.207 * For the c agent: G^cc1 = 0.091, G^cc2 = 0.085, G^cc 0.30.5 = 0.001, G^cc 0.30.5 = 0.171 Not unexpectedly, the largest probability is that the agents retain the behaviour predicted by the original game (i.e., playing pure and mixed), but not significantly so. Proposition <ref> (and the respective Table <ref>), combined with Proposition <ref> (and the respective Table <ref>) easily leads to the following theorems (summarized in Table <ref>): Consider a normal noisy game mG ∼ G^0 + M,. Then: mG: ε-misinformed = 𝒫_r^mis·𝒫_c^mis where, for x ∈{r,c}, 𝒫_x^mis is determined by the second column of Table <ref>. Consider a normal noisy game mG ∼ G^0 + M,. Then: mG: inverse-ε-misinformed = 𝒫_r^inv·𝒫_c^inv where, for x ∈{r,c}, 𝒫_x^inv is determined by the third column of Table <ref>. [Running example] Returning to our running example (Example <ref>), let us now compute the probability for the respective noisy game to be (inverse-)ε- misinformed, for ε = 0.1. To do so, we plug in the formulas from Table <ref> into Theorems <ref>-<ref>, and, using the previously computed probabilities from Example <ref>, we take: 𝒫_r^mis = 0.386, 𝒫_r^inv = 0.207, 𝒫_c^mis = 0.349, 𝒫_c^inv = 0.171 Thus, mG: ε-misinformed = 0.135 mG: inverse-ε-misinformed = 0.035. Thus, the conclusion of this analysis is that, the “Battle of the Sexes” (BoS) game, when receiving noise that follows the standard normal distribution (0,1) in each of its payoffs and for each player, will be 0.1-misinformed with probability 13.5% and inverse-0.1-misinformed with probability 3.5%. Note that the original BoS game has 3 Nash equilibria, two pure and one mixed. Thus, the above results imply that, by defining closeness using ε = 0.1: * With probability 13.5%, all (misinformed) equilibrium points of the noisy game will be close to one of the expected equilibria (in BoS). Thus, under this probability, the agents will have one or more natural misinformed equilibria, all of which will be close to one of the BoS' Nash equilibria. This means that, with probability 13.5%, the agents' behaviour (no matter which of their equilibrium points they choose) will be close to the expected one. 
* With probability 3.5%, for each of the three equilibria of BoS, there will be at least one (misinformed) equilibrium point that is close to it. Thus, under this probability, there will be at least 3 natural misinformed equilibria, although the agents may also have other (misinformed) equilibria as well that are not close to any Nash equilibrium of BoS. This means that, with probability 3.5%, all equilibria of BoS are within the valid options (modulo the closeness assumption) for the agents. § RESULTS FOR NOISY GAMES The results of Section <ref> provide the formulas to compute the probability of a given normal noisy game to be (inverse-)ε-misinformed (i.e., behaviourally consistent). In this section, we explore the properties of these formulas, to understand better their behaviour. To do so, we first observe that the probability of a normal noisy game mG ∼ G^0 + M, being behaviourally consistent is essentially a function of: * The tolerance ε. * The payoff matrix of the actual game of mG. This affects the probabilities in two ways: first, because it determines the equilibria of G^0, and, thus, the case to consider in Table <ref>; second, because it affects μ_yxi (see Table <ref>). * The noise pattern, determined by the matrices M,. In the following subsections, we study the effect of each of these parameters on the probability of mG being (inverse-)ε-misinformed. §.§ Effect of modifying tolerance () With regards to tolerance (ε), we expect that larger values of tolerance would translate to higher probability of behavioural consistency. Although this is true, we also observe that there are several cases where increasing tolerance does not affect the probability of behavioural consistency. The following proposition clarifies the situation: Consider some mG ∼ G^0 + M, and ε_1, ε_2, such that 0 ≤ε_1 < ε_2. Then: * If NE(G^0) contains a single pure strategy, then: * mG: ε_1 -misinformed = mG: ε_2 -misinformed * mG: inverse-ε_1 -misinformed = mG: inverse-ε_2 -misinformed * If NE(G^0) is finite and ((p^0,1-p^0),(q^0,1-q^0)) ∈ NE(G^0) for some 0 < p^0 < 1, 0 < q^0 < 1, then: * If max{p^0, q^0, 1-p^0, 1-q^0}≤ε_1, then: * mG: ε_1 -misinformed = mG: ε_2 -misinformed * mG: inverse-ε_1 -misinformed = mG: inverse-ε_2 -misinformed * If max{p^0, q^0, 1-p^0, 1-q^0} > ε_1, then: * mG: ε_1 -misinformed < mG: ε_2 -misinformed * mG: inverse-ε_1 -misinformed < mG: inverse-ε_2 -misinformed * If NE(G^0) is infinite, then: * If ε_1 ≥ 1 or ε_2 ≤ 0.5, then: * mG: ε_1 -misinformed = mG: ε_2 -misinformed * mG: inverse-ε_1 -misinformed = mG: inverse-ε_2 -misinformed * If ε_1 < 1 and ε_2 > 0.5, then: * mG: ε_1 -misinformed = mG: ε_2 -misinformed * mG: inverse-ε_1 -misinformed < mG: inverse-ε_2 -misinformed Proposition <ref> has several interesting consequences. First, we note that the probability for a given mG to be (inverse-)ε-misinformed is non-decreasing with respect to ε. When there is a pure Nash equilibrium, the choice of ε is irrelevant to the value of these probabilities. When there is a mixed Nash equilibrium (case 2 of the proposition), there is a limit above which ε does not affect the value of the related probability; this limit depends on the actual mixed equilibrium, but it is always equal to, or larger than 0.5, and smaller than 1. Finally, in the case where there is an infinite number of equilibria, ε affects the probabilities only for certain values (between 0.5 and 1, and only for the inverse-ε-misinformed case), as detailed in case 3 of Proposition <ref>. These are summarised in Table <ref>. 
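To make the closeness checks concrete, the following minimal Python sketch implements the two per-player conditions used above. It assumes, consistently with the ω-windows appearing in the statements, that two (possibly mixed) strategies are ε-close when the probabilities they assign to the first pure strategy differ by at most ε; the strategy sets and numerical values below are illustrative only and are not taken from the paper.

def eps_close(p, p0, eps):
    # strategies are encoded by the probability assigned to the first pure strategy
    return abs(p - p0) <= eps

def eps_misinformed_player(nme_x, ne0_x, eps):
    # every misinformed equilibrium strategy of player x is eps-close
    # to some Nash equilibrium strategy of the actual game
    return all(any(eps_close(p, p0, eps) for p0 in ne0_x) for p in nme_x)

def inverse_eps_misinformed_player(nme_x, ne0_x, eps):
    # every Nash equilibrium strategy of the actual game is eps-close
    # to some misinformed equilibrium strategy of player x
    return all(any(eps_close(p, p0, eps) for p in nme_x) for p0 in ne0_x)

# illustrative sets for one player: two pure and one mixed equilibrium strategy
# in the actual game, and a slightly shifted mixed strategy in the noisy game
ne0_x = [1.0, 0.0, 0.6]
nme_x = [1.0, 0.0, 0.48]
for eps in (0.05, 0.10, 0.15, 0.60, 1.00):
    print(eps,
          eps_misinformed_player(nme_x, ne0_x, eps),
          inverse_eps_misinformed_player(nme_x, ne0_x, eps))

Both indicators can only switch from False to True as ε grows, and they stop changing once ε exceeds the largest gap that has to be bridged; this is the same monotonicity and plateau behaviour that the proposition above establishes at the level of probabilities.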
Our results (and Table <ref>) indicate that the minimal value for the probability of mG being (inverse-)ε-misinformed is given for ε = 0. Its maximal value is taken for an appropriate ε (depending on the case); in all cases ε = 1 would also give that maximal value. These maximal/minimal values can be easily deduced by Table <ref> for the above choices of ε, and are given in Table <ref> for convenience. Note that the actual result for the minimal/maximal values results by multiplying 𝒫_r^mis with 𝒫_r^mis, and 𝒫_r^inv with 𝒫_r^inv for ε-misinformed and inverse-ε-misinformed respectively. Another important result (albeit relatively obvious) is that the probability of mG being (inverse-)ε-misinformed, viewed as a function of ε, is continuous. This is a direct consequence of the results in Tables <ref>, <ref>, <ref>. An important consequence of this fact, by well-known results of calculus, is that, for any given target value for the probabilities of (inverse-)ε-misinformed (within the bounds shown in Table <ref>), there exists some ε whose application would result to that value for the respective probability. §.§ Effect of changing the game () and the mean () Consider a misinformation game mG ∼ G^0 + M,, and let us informally ponder on the effect of bias in the noise of a game. A biased noise is noise whose mean M is non-zero, i.e., M ≠0. Let us consider only player r, for simplicity. In such a scenario, we know that G^r ∼ G^0 + M^r, ^r. Observe that this is the same as writing G^r ∼ (G^0 + M^r) + 0,^r. Using this simple reasoning, the computation of the probabilities of behavioural consistency for mG for biased noise can be reduced to computations related to some mG with unbiased noise (M = 0), whose actual game will be the sum of G^0 and M^x. However, there are two caveats here. First, since M^r may be different than M^c, our original misinformation game is essentially reduced to two different misinformation games (say mG_r, mG_c), i.e., one per player. Second, in the case where the equilibria of G^0 are different than the equilibria of G^0 + M^x, care should be taken to consult the proper line in Table <ref> while computing the probability of mG being (inverse-)ε-misinformed. In particular, the line to consider should be the one related to the equilibria of G^0, not G^0 + M^x. This means that the probability of mG being (inverse-)ε-misinformed may not be the same as the respective probability for mG_r, mG_c. To prove the above ideas formally, we start with the following proposition: Consider two noisy games mG ∼ G^0 + M,, mG∼G^0 + M,. Suppose that there exists a ∈ℝ, x ∈{r,c} such that G^0 + M^x = G^0 + M^x + a. Then: * For any i ∈{1,2}, G^xxi = G^xxi * For any 0 ≤ω_1 ≤ω_2 ≤ 1, G^xxω_1ω_2 = G^xxω_1ω_2 * For any 0 ≤ω_1 ≤ω_2 ≤ 1, G^xxω_1ω_2 = G^xxω_1ω_2 Proposition <ref> implies that, given a noisy game mG ∼ G^0 + M, and a player x ∈{r,c}, we can generate some other noisy game (say mG), whose probabilities related to the various outcomes (equilibria) of the game G^x of mG are identical to the respective ones for G^x (in mG). As a matter of fact, there is an infinite number of noisy games that satisfy this property: for any given G^0 we can find an infinite number of M that do this, and for any given M we can find an infinite number of G^0 that do this. This observation motivates us to consider some interesting special cases, formalised as corollaries below. The first interesting case is when M = 0. 
Given a noisy game mG, the following corollary shows that the probabilities related to the various outcomes (equilibria) of the game G^x in mG can be predicted by looking at a properly defined noisy game mG where the noise is unbiased (i.e., M = 0). Formally: Consider a noisy game mG ∼ G^0 + M,, and some x ∈{r,c}. Set G^0 = G^0 + M^x, and mG∼G^0 + 0,. Then: * For any i ∈{1,2}, G^xxi = G^xxi * For any 0 ≤ω_1 ≤ω_2 ≤ 1, G^xxω_1ω_2 = G^xxω_1ω_2 * For any 0 ≤ω_1 ≤ω_2 ≤ 1, G^xxω_1ω_2 = G^xxω_1ω_2 Combining Corollary <ref> with Theorems <ref>, <ref>, it is easy to compute the probability that mG is (inverse-)ε-misinformed, using the respective probabilities for mG. This is one of the main results of this subsection, as it allows us to restrict our study to noisy games with unbiased noise only. An interesting observation is that Corollary <ref> applies for some x ∈{r,c}. Thus, we need to define two different mG (one for each player x ∈{r,c}) in order to compute the probability that mG is (inverse-)ε-misinformed. The following corollary holds for both x ∈{r,c} (and thus foregoes this need), but applies only when M^r = M^c, i.e., when the noise received by the two players has the same bias: Consider a noisy game mG ∼ G^0 + M,, where M = (M^*;M^*). Set G^0 = G^0 + M^*, and mG∼G^0 + 0,. Then: * For any i ∈{1,2} and x ∈{r,c}, G^xxi = G^xxi * For any 0 ≤ω_1 ≤ω_2 ≤ 1 and x ∈{r,c}, G^xxω_1ω_2 = G^xxω_1ω_2 * For any 0 ≤ω_1 ≤ω_2 ≤ 1 and x ∈{r,c}, G^xxω_1ω_2 = G^xxω_1ω_2 Proposition <ref> and Corollary <ref> provide the probability of the different events to occur (e.g., the probability that G^x has a certain equilibrium), but do not directly provide the probability for mG being (inverse-)ε-misinformed. Indeed, since G^0 and G^0 may have different equilibria, the computation of the probabilities for mG and mG being (inverse-)ε-misinformed may use different rows in Table <ref>. This is unnecessary only when the two games have the same equilibria: Consider a noisy game mG ∼ G^0 + M,. Set: G^0 = G^0 + M^r, G^0 = G^0 + M^c, mG∼G^0 + 0,, mG∼G^0 + 0, If NE(G^0) = NE(G^0) = NE(G^0) then: * mG: ε-misinformed = 𝒫_r^mis·𝒫_c^mis * mG: inverse-ε-misinformed = 𝒫_r^inv·𝒫_c^inv where 𝒫_r^mis, 𝒫_r^inv, 𝒫_c^mis, 𝒫_c^inv are the probabilities of Table <ref> for mG, mG respectively. Note that, in Corollary <ref>, the computation of the probability for mG to be (inverse-)ε-misinformed, occurs via the combination of quantities from two different noisy games (mG, mG). As with Corollary <ref>, this can be avoided when the noise received by the two players has the same bias, in which case we get a direct computation of the related probability: Consider the noisy game mG ∼ G^0 + M,, where M = (M^*;M^*). Set G^0 = G^0 + M^* and mG∼G^0 + 0,. If NE(G^0) = NE(G^0) then: * mG: ε-misinformed = mG: ε-misinformed * mG: inverse-ε-misinformed = mG: inverse-ε-misinformed Corollary <ref> is the most specific result, as it gives us a method of computing the probabilities of a noisy game being (inverse-)ε-misinformed using the respective probabilities of another noisy game, under specific assumptions. The last proposition of this subsection follows easily from Proposition <ref>, and shows an elegant, and expected, property of noisy games. In particular, changing the payoff matrix of a game by adding any fixed constant number to all payoffs, does not modify the probability of the respective noisy game to be (inverse-)ε-misinformed (for a fixed noise pattern). 
This is expected, because the addition of a fixed number in the payoffs does not change the structure of the game, and, thus, the two games are considered “equivalent” in standard game theory. The proposition below proves a more complex version of this statement, showing that the same is true for the noise pattern: adding a fixed amount of bias across the board does not modify the respective probabilities. Formally: Consider a noisy game mG ∼ G^0 + M,, and constant numbers a_G, a_r, a_c ∈ℝ. Set G^0 = G^0 + a_G and M = (M^r;M^c), where M^x = M^x + a_x for x ∈{r,c}. Moreover, set mG∼G^0 + M,. Then: * mG: ε-misinformed = mG: ε-misinformed * mG: inverse-ε-misinformed = mG: inverse-ε-misinformed §.§ Effect of modifying noise intensity () Although adding a fixed constant number to the game's payoffs does not modify the respective probabilities (Proposition <ref>), this is not the case when changing the “scale” of a game (by multiplying all its payoffs by a constant number, say λ > 0). In particular, changing the scale of a game will affect its “resilience” to noise, without changing the game's properties and behaviour, because it increases the “amount of noise” necessary to change the sign of the various yxi. As a matter of fact, multiplying the payoffs by a sufficiently large number would minimize the effect of the noise, as its effects on the payoffs would be, comparatively smaller (analogously, using a sufficiently small positive number would maximize the effect of the noise). In Proposition <ref> (and especially in Corollary <ref>), we quantify this effect, by showing that we need to multiply the noise intensity (standard deviation) by λ^2 in order for the noise to have the same effect on a game scaled by λ. Formally: Consider a normal noisy game mG ∼ G^0 + M, and some λ > 0. Set: G^0 = λ G^0, M = λ M and = λ^2, and consider the normal noisy game mG = G^0 + M,. Then: * mG: ε-misinformed = mG: ε-misinformed * mG: inverse-ε-misinformed = mG: inverse-ε-misinformed An obvious and interesting corollary of Propositions <ref> and <ref> is the following: Consider a normal noisy game mG ∼ G^0 + 0, and some λ > 0, k ∈ℝ. Set: G^0 = λ G^0 + k and = λ^2, and consider the normal noisy game mG = G^0 + 0,. Then: * mG: ε-misinformed = mG: ε-misinformed * mG: inverse-ε-misinformed = mG: inverse-ε-misinformed From the previous theoretical results, the effect of noise in the outcome of an abstract 2 × 2 bimatrix game has the following characteristics: for small values of the noise intensity (standard deviation), players almost surely have the same behaviour as in the actual game, whereas for large noise intensity, the behaviour of players cannot be predicted as their games will be almost random. Also, observe that the formulas giving the probabilities for (inverse)-ε-misinformed are continuous with respect to the standard deviation. Given the above, one would expect that, by increasing the standard deviation, we would monotonically transit from the first extreme to the second. However, this does not always hold, as the following counter-example shows. example1 Consider as actual game the classical Prisoner's Dilemma (see Figure <ref>), which has a pure Nash equilibrium with strategy profile ((0,1),(0,1)). We produce a noisy game, in which the noise only affects the upper left elements of the actual payoff matrix, where we add noise according to a random variable following the normal distribution 0,^2. From Theorems <ref>, <ref>, we can compute the probabilities for this game to be (inverse-)ε-misinformed. 
The result is shown in Figure <ref>, where we plot mG: ε-misinformed (blue line) and mG: inverse-ε-misinformed (orange line), for ∈ (0,10). As is obvious by this figure, these functions are not monotonic with respect to . § DISCUSSION AND EXPERIMENTS In this section we report on experiments that validate our basic results, and we investigate the effect of noise on the players' decisions, for the four 2 × 2 bimatrix games shown in Figure <ref>. The games were chosen to capture the following cases: i) dominant equilibrium (Prisoner's Dilemma), ii) unique mixed Nash equilibrium (Matching Pennies), iii) multiple Nash equilibria (Battle of the Sexes), and iv) dominant equilibrium that coincides with the optimal outcome (Win-Win). §.§ Theoretical and Experimental Computation of the Probability that a Game is (Inverse-)ε-misinformed We consider that the actual game undergoes an additive noise that follows the normal distribution 𝒩( 0, ^2) where ∈{ 0.001, 0.5, 1, …, 10}. We compare the theoretical values of probabilities that we get from Theorems <ref>, <ref>, with the respective values calculated through Monte Carlo simulations. The Monte Carlo simulations were conducted as follows: we generate a game G^0, which can be one of the four games shown in Figure <ref>. Then, for each of the above values for , we create the respective noisy game mG=G^0+ 0, ^2. To be more precise, we generate a misinformation game, where the misinformation stems from the incorporation of additive noise stemming from one random experiment that follows the above distribution (0, ^2). We derive the natural misinformed equilibrium and check about ε-closeness. We perform 3,000 repetitions of the above process and calculate: a) the percentage of games that are ε-misinformed (i.e., all nmes of mG are ε-close to one Nash equilibrium of G^0, according to the first bullet of Definition <ref>), b) the percentage of games that are inverse ε-misinformed (i.e., all Nash equilibria of G^0 are ε-close to one nme of mG, according to the second bullet of Definition <ref>). We repeat the simulations for two different values of ε (ε∈{10^-2, 10^-3}). The results are shown in Figures <ref>-<ref>. As the Prisoner's Dilemma and the Win-Win games both have a unique pure Nash equilibrium, their behavioural consistency is similar. Hence, Figure <ref> shows both cases. In all subplots, we have plots of two colours. The blue ones depict the computations for ε = 10^-2, whereas the red ones depict the computations for ε = 10^-3. In part (a) of the figures, the horizontal axis depicts the different values for the standard deviation of the noise, and the vertical axis depicts the probability of a game being ε-misinformed according to Theorem <ref> (solid line) or the probability of a game being ε-misinformed according to the Monte Carlo simulations (dotted lines). The same hold for part (b) of the figures, but for the inverse-ε-misinformed case (Theorem <ref>). In both subfigures, a high value of the probability calculated in the vertical axis implies a small effect of noise on players' decisions. As expected, the theoretical results are very close to the experimental ones. These figures give rise to various remarks concerning the influence of noise to players' strategic choices. Although some general patterns emerge, the effect of noise in the behavioural consistency of the game greatly depends on the type and number of Nash equilibria that it has, so we split our analysis in 3 different cases. 
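A minimal Python sketch of this Monte Carlo procedure is given below. It assumes unbiased noise with a common standard deviation sigma on every payoff entry, encodes each (possibly mixed) strategy by the probability of the first action, takes ε-closeness to mean that these probabilities differ by at most ε, and uses the per-player decomposition NME(mG) = NE_r(G^r) × NE_c(G^c) established in the proofs; the Battle-of-the-Sexes-like payoff values are illustrative and need not coincide with those of Figure <ref>.

import numpy as np

rng = np.random.default_rng(0)

def eq_strategies(Pr, Pc):
    # Equilibrium strategies of a (generically non-degenerate) 2x2 bimatrix game,
    # returned as (row-player strategies, column-player strategies); each strategy
    # is encoded by the probability of playing the first action.
    r = Pr[0, :] - Pr[1, :]      # row's gain of action 1 over action 2, per column action
    c = Pc[:, 0] - Pc[:, 1]      # column's gain of action 1 over action 2, per row action
    row_eq, col_eq = set(), set()
    for i, p in enumerate((1.0, 0.0)):            # row plays action 1 or action 2
        for j, q in enumerate((1.0, 0.0)):        # column plays action 1 or action 2
            if ((r[j] > 0) == (i == 0)) and ((c[i] > 0) == (j == 0)):
                row_eq.add(p)                     # mutual best responses: pure equilibrium
                col_eq.add(q)
    if r[0] * r[1] < 0 and c[0] * c[1] < 0:       # a fully mixed equilibrium exists
        row_eq.add(c[1] / (c[1] - c[0]))          # makes the column player indifferent
        col_eq.add(r[1] / (r[1] - r[0]))          # makes the row player indifferent
    return row_eq, col_eq

def all_close(S, T, eps):
    # every strategy in S is eps-close to some strategy in T
    return all(any(abs(p - q) <= eps for q in T) for p in S)

def mc_probabilities(Pr, Pc, sigma, eps, n_rep=3000):
    ne_r0, ne_c0 = eq_strategies(Pr, Pc)          # equilibria of the actual game
    n_mis = n_inv = 0
    for _ in range(n_rep):
        # each player receives her own independently perturbed copy of both payoff matrices
        Gr = (Pr + sigma * rng.standard_normal((2, 2)),
              Pc + sigma * rng.standard_normal((2, 2)))
        Gc = (Pr + sigma * rng.standard_normal((2, 2)),
              Pc + sigma * rng.standard_normal((2, 2)))
        nme_r = eq_strategies(*Gr)[0]             # row strategies from the row player's game
        nme_c = eq_strategies(*Gc)[1]             # column strategies from the column player's game
        n_mis += all_close(nme_r, ne_r0, eps) and all_close(nme_c, ne_c0, eps)
        n_inv += all_close(ne_r0, nme_r, eps) and all_close(ne_c0, nme_c, eps)
    return n_mis / n_rep, n_inv / n_rep

# Battle-of-the-Sexes-like payoffs (illustrative values)
Pr = np.array([[3.0, 0.0], [0.0, 2.0]])
Pc = np.array([[2.0, 0.0], [0.0, 3.0]])
print(mc_probabilities(Pr, Pc, sigma=1.0, eps=0.01))

Sweeping sigma over the grid used in the experiments and recording the two fractions reproduces the kind of curves discussed in the three cases below.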
Case 1: Unique Pure Nash equilibrium The case of a unique pure Nash equilibrium appears in the Prisoner's Dilemma and Win-Win games, whose behaviour is depicted in Figure <ref>. For Prisoner's Dilemma, we observe that, for small values of the standard deviation (≪ 1), the nme of mG will usually be the same as the NE of the original game (G^0). Thus, both probabilities [emis]mG; ε and [invemis]mG; ε will have values close to 1. As increases, noise will produce misinformation games G^r, G^c with different Nash equilibria than that of the actual game (different means non-close, by definition, in this case) with an increasing probability, thereby reducing the probability for behavioural consistency. As increases further, each of the different possible sets of equilibria will appear with almost equal probability in G^r, G^c, leading to a convergence in the plots of Figure <ref>. In particular, [emis]mG; ε converges to approximately 14%, whereas [invemis]mG; ε converges to approximately 25%. This can be theoretically predicted by observing Table <ref>. For a large enough noise, the original orderings among the elements of the payoff matrix become increasingly irrelevant, and the actual orderings in each of G^r, G^c become totally random. As a result, the equilibrium strategy for r in G^r will be a pure one with a probability of 6/8 (3/8 for each strategy), a mixed one with 1/8 probability, and pure-and-mixed with 1/8 probability. The same is true of course for c in G^c. Combining these observations with Table <ref> and Theorems <ref>, <ref>, we get the above numbers for the convergence of [emis]mG; ε, [invemis]mG; ε. Similar remarks hold for the Win-Win game that has one pure Nash equilibrium strategy profile (namely, ((1, 0), (0, 1))). Case 2: Unique Mixed Nash equilibrium The case of a unique mixed Nash equilibrium appears in the Matching Pennies game, which has one Nash equilibrium strategy profile ((1/2, 1/2), (1/2, 1/2)), and whose behaviour is depicted in Figure <ref>. As in case 1, we observe that for small values of the standard deviation (≪ 1), mG will have the same nme as the NE in G^0. Thus, both probabilities [emis]mG; ε and [invemis]mG; ε will have values close to 1. As increases, noise will produce games G^r, G^c with different Nash equilibria than that of the actual game G^0, and the respective probabilities fall sharply (much faster compared to the Prisoner's Dilemma case), converging to a value close to 0 for large values of the standard deviation. This is explained by the fact that, although a mixed nme is achieved in some of the produced games, this is often not close to the actual mixed one, leading to games that are (usually) not (inverse-)ε-misinformed. For example, for ε = 10^-2, the function [emis]mG; ε convergences at around 0.03%. Case 3: Multiple Nash equilibria The case of two pure and one mixed Nash equilibrium appears in the Battle of the Sexes game, whose behaviour is depicted in Figure <ref>. The Nash equilibrium strategy profiles of Battle of the Sexes are: { ((1,0), (1,0)), ((0,1), (0,1)), ((2/3,1/3), (1/3,2/3)) }. Unlike other games, we observe that the Battle of the Sexes has zero probability of being ε-misinformed for small values of . This is explained by the fact that, for small values of , G^r, G^c will be very similar to G^0, each giving 3 equilibrium strategies (for the respective player). 
Thus, there are 9 nmes, one for each combination of equilibrium strategies (see Definition <ref>), so some of them will not be ε-close to one of the three equilibria of G^0. By Definition <ref> this means that the respective game is not ε-misinformed, so [emis]mG; ε will be close to 0. As increases, and the games G^r, G^c become less and less predictable, the probability of being ε-misinformed becomes larger, reaching a plateau at around 72%. The explanation here is analogous to the one given for the other two cases: in order for a misinformation game to not be ε-misinformed, it should either have one pure equilibrium (but not one of the two that are in the equilibria of G^0), or it should have one mixed equilibrium (but not ε-close to the one of G^0). Based on the analysis of the Prisoner's Dilemma game, the probability of the former is around 28%; based on the analysis of the Matching Pennies game, the probability of the latter is close to 0; combining these observations, we conclude that a plateau at around 72% is reasonable. For the inverse-ε-misinformed case (part (b) of Figure <ref>), small values of result to high values for [invemis]mG; ε, as expected. As increases, the probability decreases at a rate even faster than the one observed for Matching Pennies, eventually converging at a value close to 0. This is explained by the fact that, in order for the game to be inverse-ε-misinformed, it should have, among other things, also a mixed equilibrium that is close to the respective mixed of G^0. As we established in Case 2 above, this has a very low probability for large values of . §.§ Optimal strategy profiles in terms of efficiency In this subsection, we report on experiments that investigate whether the misinformation game mG that results from a given actual game G^0 has natural misinformed equilibria that are best or worst in terms of efficiency (social welfare). We then evaluate the effect of noise on each of the four games under consideration. We performed Monte Carlo simulations as in the previous section and calculated: a) the percentage P_best of misinformation games that have a natural misinformed equilibrium that maximizes social welfare (best nme), b) the percentage P_worst of misinformation games that have a natural misinformed equilibrium that minimizes social welfare (worst nme). We repeat the simulations for all values of in {0.02, 0.04, …, 10} and for ε = 10^-2. The results are shown in Figures <ref> and <ref>. In Matching Pennies, as it is a constant-sum game, all strategy profiles provide the same level of social welfare, so the respective line is flat, regardless of the value of (see Figures <ref> and <ref>). In other words, the noise has no effect with respect to the optimal outcome. In Prisoners' Dilemma, the best strategy profile is ((1, 0), (1, 0)) and the worst one is ((0, 1), (0, 1)) which coincides with the pure NE of the actual game G^0. We observe that, for small values of , only a few repetitions provide the best nme (Figure <ref>), while most of them provide the worst nme (Figure <ref>); this is in line with the results given in the previous subsection. As increases, the percentage of games resulting in the best strategy increases too, implying that noise has a positive effect on Prisoners' Dilemma. In the Battle of the Sexes, the best strategy profiles are ((1, 0), (1, 0)) and ((0, 1), (0, 1)) (these are also the pure Nash equilibria of the actual game), and the worst strategy profiles are ((1, 0), (0, 1)) and ((0, 1), (1, 0)). 
We observe that, for small values of , most of the misinformation games result in one of the best strategy profiles (Figure <ref>). As increases, this percentage decreases, implying that noise has a negative effect on the Battle of the Sexes: players are not forced to choose better strategies. In the Win-Win game, the best strategy profile is ((1, 0), (0, 1)) and the worst one is ((0, 1), (1, 0)). The same observations as in the Battle of the Sexes hold for the Win-Win game. To summarize, as the percentage P_best increases (or P_worst decreases) with respect to , noise is beneficial. This is the case for Prisoners' Dilemma. On the contrary, noise deteriorates the efficiency of the system if the percentage P_best decreases (or P_worst increases) with respect to as in Win-Win and Battle of the Sexes games. Finally, the efficiency of the system is independent of the noise in the Matching Pennies game. Given the above, as expected, noise deteriorates the social welfare in games where the original Nash equilibrium is already “good” for the social welfare (Battle of the Sexes, Win-Win), as it induces a more “random” behaviour. On the contrary, it improves the situation in games where the original equilibrium is “bad” (e.g., Prisoner's Dilemma). In constant sum games (e.g., Matching Pennies), noise has no effect with regards to the social welfare. §.§ PoM vs PoA In this subsection, we compare the price of anarchy PoA with the price of misinformation PoM for the four games of interest. Both metrics measure social welfare, with or without misinformation respectively, and take values that are higher than or equal to 1. Given a bimatrix game G with payoff matrix P = (P_r; P_c) we use Definition <ref> to compute PoM for all values of pairs (p,q), where p,q ∈ [0,1]. The values of p,q are non other than the values in the joint strategy profile σ = (𝐩⃗, 𝐪⃗) = ((p,1-p),(q, 1-q)). In formula <ref>, the quantities in the fraction are given by the formula SW(σ) = p^T(P_r + P_c) q. The respective graphs are shown in Figures <ref>-<ref>. We can make the following observations on social welfare planes of Figures <ref>-<ref> that present the range of values of PoM: * In Prisoner's Dilemma we note that the social welfare plane is monotonic (see Figure <ref>). The minimum value is in the bottom left corner (“bluest”) and the maximum value is in the upper right corner (“redest”). We know that the PoA in this game is 2, which is equal to the minimum social welfare, so any distortion in the payoff matrices of the game does not deteriorate the efficiency of the game, and PoM ≤ PoA, for every level of noise. * In Matching Pennies we observe that the social welfare plane is constant (Figure <ref>). That is, PoM remains constant as any combination of the values of the payoff matrix results in the same social welfare value. Thus, noise may affect the strategic behaviour of players, but keeps the social welfare constant. Note that, in zero-sum games such as Matching Pennies, the value of PoM and PoA cannot be calculated (the denominator of the respective formulas takes the value of zero). To mitigate this inconvenience we add proper values to each element of the payoff matrices and produce a constant-sum game, without affecting the strategic behaviour of players. * In Battle of the Sexes we observe that the two pure Nash equilibria of the game are the optimal strategic behaviours (Figure <ref>). Thus, PoA depends on the mixed Nash equilibrium, and noise could improve or degrade the efficiency of the system. 
* In Win-Win, the unique Nash equilibrium coincides with the optimal one, thus PoA = 1 (Figure <ref>). Therefore, any misinformation cannot improve the outcome of this game, and PoM ≥ PoA. § CONCLUSION AND FUTURE WORK In this paper we studied a novel game-theoretic setting, where players receive the information regarding the game's payoffs with a distortion that affects the elements of the payoff matrix. This distortion was assumed to be due to additive noise that follows a normal distribution, and could be due to communication errors that may appear when the game's parameters are communicated through a noisy channel, or when some malfunction in the sender or receiver distorts this information. In such noisy settings, it is possible that each player knows a different game compared to her opponent and compared to the actual (originally communicated) one. We model this situation using misinformation games, an appropriate theoretical setting introduced previously in <cit.>, and define a subclass of misinformation games called noisy games (see Section <ref>). The main problem considered in this setting is the computation of the probability for behavioural consistency, i.e., the probability that the agents' behaviour will be “close” (under some formal definition of closeness) to the one expected according to the original game, despite the noise. Towards this, two alternative formal definitions of behavioural consistency are given (Subsection <ref>) and the respective probabilities are computed in Section <ref>. Note that, due to the complexity of the formulas, we restricted ourselves to 2-player bimatrix games with 2 strategies per player. We elaborate on those formulas and prove a number of related results (Section <ref>), which help understand their properties. Such properties include the effect of the definition of closeness and/or the noise structure in the respective probabilities for behavioural consistency, as well as a study of how different interventions and modifications on the original game would affect these probabilities. Moreover, we perform several numerical experiments using four well-known bimatrix games as benchmarks (see Figure <ref>). Initially, we compare the probabilistic formulas with Monte Carlo simulation to verify their correctness. Then, we derive general remarks as to the efficiency of the system regarding the additive noise, in terms of social welfare. To do so, we use the Price of Misinformation metric, which is inspired by the well-known Price of Anarchy metric and quantifies how benevolent/malevolent is the misinformation caused by the noise with regards to game performance (related to social welfare). Undeniably, the 2 players' bimatrix games with 2 strategies per player is a very restricting setting. Unsurprisingly however, even in this simple setting our analysis highlighted the richness, intricacy and interdependence of the probabilistic events, mathematical objects and techniques that were involved, leading to complex mathematical computations and stiff formulations as regards the end results. Having said that, we plan to consider more complex settings in the future, i.e., scenarios with more than two players and/or scenarios where each player may have more than two strategies. Further, we could consider deriving analogous probabilistic formulas for other classes of noise distributions (e.g., poisson or laplacian). 
Moreover, an immediate future step is to provide tools to quantify the sensitivity of a game to random noise, i.e., determine “how much noise” the game can withstand so that the behaviour of the players remains close (under the sense of behavioural consistency) to the expected ones, with a certain probability. A related research question is how sensitivity is affected by inconsequential changes in the game specification (e.g., change of scale); in this direction, results like Proposition <ref> can help. This analysis could be used as a tool for game designers to improve their designs and make them more robust to unexpected circumstances and noise in the communication channels. § PROOFS FOR THE RESULTS APPEARING IN THE PAPER §.§ Normal Form Games <Ref> Let's consider G= N,S,P, for P=(P_c; P_c). Suppose that G is degenerate. By definition, there is a pure strategy (say s_i, by player x ∈{r,c}) that has two pure best responses. Suppose that x = r, i = 1. Then, since s_1,s_2 are equally preferred by c, it follows that P_c[1,1] = P_c[1,2], i.e., c1 = 0. The other cases (i.e., when x = c and/or i = 2) are analogous. For the opposite, suppose that r1 = 0. Then P_c[1,1] = P_c[1,2], so c has two pure best responses for the strategy s_1 of r, which means that G is degenerate. The proof is analogous for the other cases. <Ref> Suppose that x=r. From classical game theoretic results (e.g., see <cit.>, <cit.>), and our assumptions, we get that p will satisfy the following equation: p · P_c[1,1] + (1-p) · P_c[2,1] = p · P_c[1,2] + (1-p) · P_c[2,2] The result now follows trivially by solving this equation and applying the definition of ci. Analogously, for the case where x = c, we get the following equation: p · P_r[1,1] + (1-p) · P_r[1,2] = p · P_r[2,1] + (1-p) · P_r[2,2] Solving it, as above, will give the required result. <Ref> By Proposition <ref>, we conclude that xi≠ 0 for all x ∈{r,c}, i ∈{1,2}. This means that the different (mutually exclusive) cases of the formulation of the proposition cover all possible cases for a non-degenerate game (see also Table <ref>). Thus, it suffices to show the “only if” part for each different case. For (1a), note that player x will play (1,0) (i.e., s_1) regardless of the choice of x̅, so NE_x(G) = {(1,0)} and x1 is true. For (1b), note that the only Nash equilibrium of G is ((1,0),(1,0)), which proves the result. Next, (1c) is analogous to (1b). The cases (2a), (2b), (2c) are analogous to (1a), (1b), (1c) respectively. With regards to (3a), it can be easily shown that the game can have no pure Nash equilibrium. Thus, it must have a mixed one (by the result of Nash <cit.>). Moreover, it cannot have more than one mixed, as this would render it degenerate[Immediate consequence of Corollary 3.7 <cit.>.] (see <cit.>,<cit.>,<cit.>). Thus, NE_x(G) = {(p,1-p)}, for some 0 < p < 1. By Proposition <ref>, it follows that p = x̅2/x̅2 - x̅1, which shows the result. The case (3b) is analogous. For (4a), we observe that the values of xi imply that the game has exactly two pure Nash equilibria, namely: ((1,0),(1,0)) and ((0,1),(0,1)). By <cit.>, <cit.>, it must also have one (unique) mixed equilibrium, as we examine a non-degenerate case. Thus, NE_x(G) = {(1,0), (0,1), (p,1-p)} for some 0 < p < 1. Again, using Proposition <ref>, it follows that p = x̅2/x̅2 - x̅1, which shows the result. For (4b) the proof is analogous, except that here the pure Nash equilibria of G are: ((1,0),(0,1)) and ((0,1),(1,0)). 
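The closed form quoted in Proposition <ref> can be recovered mechanically from the indifference equation used in this proof. The short SymPy sketch below does so for the row player's mixing probability, with a, b, c, d standing for P_c[1,1], P_c[1,2], P_c[2,1], P_c[2,2]; this labelling is introduced only for the check and is not the paper's notation.

import sympy as sp

p, a, b, c, d = sp.symbols('p a b c d', real=True)
# a = P_c[1,1], b = P_c[1,2], c = P_c[2,1], d = P_c[2,2]
indifference = sp.Eq(p * a + (1 - p) * c, p * b + (1 - p) * d)
sol = sp.solve(indifference, p)[0]
print(sp.simplify(sol))
# sol is equivalent to (c - d) / ((c - d) - (a - b)), i.e. the opponent's gain
# ratio c2 / (c2 - c1) with c1 = a - b and c2 = c - d, as stated in Proposition <ref>.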
§.§ Misinformation Games and Noisy games <Ref> By definition, mG is ε-misinformed if and only if for all σ^* = (σ_r^*, σ_c^*) ∈ NME(mG) there exists σ^0 = (σ_r^0,σ_c^0) ∈ NE(G^0) such that σ^*, σ^0 are ε-close. More formally: mG:ε misinformed ⇔∀σ^* = (σ_r^*, σ_c^*) ∈ NME(mG) ∃σ^0 = (σ_r^0,σ_c^0) ∈ NE(G^0): σ^* ∈σ^0 ⇔∀σ^* = (σ_r^*, σ_c^*) ∈ NME(mG), σ^* ∈NE(G^0) ⇔∀σ^* = (σ_r^*, σ_c^*) ∈ NME(mG) ( σ_r^* ∈NE_r(G^0)∧σ_c^* ∈NE_c(G^0)) ⇔∀σ_r^* ∈ NE_r(G^r), σ_r^* ∈NE_r(G^0)∧∀σ_c^* ∈ NE_c(G^c), σ_c^* ∈NE_c(G^0) ⇔∀ x ∈{r,c}∀σ_x^* ∈ NE_x(G^x), σ_x^* ∈NE_x(G^0) Now let us fix some x and consider the different cases with regards to NE_x(G^0): * If NE_x(G^0) contains a single pure strategy, i.e., G^0xi is true for some i ∈{1,2}, then the expression ∀σ_x^* ∈ NE_x(G^x), σ_x^* ∈NE_x(G^0) is true if and only if NE_x(G^x) contains the same pure strategy, and no other, i.e., if and only if G^xxi is true. * If NE_x(G^0) contains a single mixed strategy, i.e., G^0xp^0 is true for some 0 < p^0 < 1, then the expression ∀σ_x^* ∈ NE_x(G^x), σ_x^* ∈NE_x(G^0) is true if and only if NE_x(G^x) contains a single mixed strategy that is ε-close to (p^0,1-p^0), i.e., G^xxω_1ω_2 is true, where ω_1 = max{0, p^0 - ε}, ω_2 = min{1, p^0 + ε}. Note that the max, min are necessary to cater for the case where p^0 - ε, p^0 + ε are smaller than 0 or greater than 1, respectively. * If NE_x(G^0) contains two pure and one mixed strategies, i.e., G^0xp^0 is true for some 0 < p^0 < 1, then the expression ∀σ_x^* ∈ NE_x(G^x), σ_x^* ∈NE_x(G^0) is true if and only if NE_x(G^x) contains either a pure or a mixed strategy that is ε-close to (p^0,1-p^0). This is expressed by the expression in bullet #3 of the proposition. * If NE_x(G^0) = Σ_x, i.e., G^0x is true, then, no matter the contents of NE_x(G^x), the expression ∀σ_x^* ∈ NE_x(G^x), σ_x^* ∈NE_x(G^0) is true. This, combined with the fact that these are the only cases with regards to the value of NE_x(G^0), conclude the proof. <Ref> By definition, mG is inverse-ε-misinformed if and only if for all σ^0 = (σ_r^0, σ_c^*) ∈ NE(G^0) there exists σ^* = (σ_r^*,σ_c^*) ∈ NME(mG) such that σ^*, σ^0 are ε-close. More formally: mG: inverse-ε-misinformed ⇔∀σ^0 = (σ_r^0, σ_c^0) ∈ NE(G^0) ∃σ^* = (σ_r^*, σ_c^*) ∈ NME(mG): σ^* ∈σ^0 ⇔( ∀σ_r^0 ∈ NE_r(G^0) ∃σ_r^* ∈ NE_r(G^r): σ_r^* ∈NE_r(σ_r^0)) ∧( ∀σ_c^0 ∈ NE_c(G^0) ∃σ_c^* ∈ NE_c(G^c): σ_c^* ∈NE_c(σ_c^0)) ⇔∀ x ∈{r,c}∀σ_x^0 ∈ NE_x(G^0) ∃σ_x^* ∈ NE_x(G^x): σ_x^* ∈NE_x(σ_x^0) Now let us fix some x and consider the different cases with regards to NE_x(G^0): * If NE_x(G^0) contains a single pure strategy, i.e., G^0xi is true for some i ∈{1,2}, then the expression ∀σ_x^0 ∈ NE_x(G^0) ∃σ_x^* ∈ NE_x(G^x): σ_x^* ∈NE_x(σ_x^0) is true if and only if NE_x(G^x) contains the same pure strategy, possibly in addition to others, i.e., (given that G^x is non-degenerate) if and only if G^xxi⋁G^xx01 is true. * If NE_x(G^0) contains a single mixed strategy, i.e., G^0xp^0 is true for some 0 < p^0 < 1, then the expression ∀σ_x^0 ∈ NE_x(G^0) ∃σ_x^* ∈ NE_x(G^x): σ_x^* ∈NE_x(σ_x^0) is true if and only if NE_x(G^x) contains a mixed strategy that is ε-close to (p^0,1-p^0), possibly in addition to others, i.e., (given that G^x is non-degenerate) G^xxω_1ω_2 is true, where ω_1 = max{0, p^0 - ε}, ω_2 = min{1, p^0 + ε}. Note that the max, min are necessary to cater for the case where p^0 - ε, p^0 + ε are smaller than 0 or greater than 1, respectively. 
* If NE_x(G^0) contains two pure and one mixed strategies, i.e., G^0xp^0 is true for some 0 < p^0 < 1, then the expression ∀σ_x^0 ∈ NE_x(G^0) ∃σ_x^* ∈ NE_x(G^x): σ_x^* ∈NE_x(σ_x^0) is true if and only if NE_x(G^x) contains two pure and a mixed strategy that is ε-close to (p^0,1-p^0), i.e., (given that G^x is non-degenerate) G^xxω_1ω_2 is true, where ω_1 = max{0, p^0 - ε}, ω_2 = min{1, p^0 + ε}. * If NE_x(G^0) = Σ_x, i.e., G^0x is true, then, ∀σ_x^0 ∈ NE_x(G^0) ∃σ_x^* ∈ NE_x(G^x): σ_x^* ∈NE_x(σ_x^0) is true if and only if at least one of the strategies in NE_x(G^x) is ε-close to each strategy in NE_x(G^0). Given that NE_x(G^x) is finite (because G^x is non-degenerate), this can only hold if G^xxp^x for some p^x such that (p,1-p) ∈(p, 1-p) for all 0 < p < 1. From the latter, we conclude that ε≥ 0.5 and max{0, p^0 - ε} < p^x < min{1, p^0 + ε}, which leads to the requirement in bullet #4 of the proposition. This, combined with the fact that these are the only cases with regards to the value of NE_x(G^0), conclude the proof. §.§ Probabilities <Ref> The result is direct from Proposition <ref> and the fact that xyi = 0 = 0 for any x,y ∈{r,c}, i ∈{1,2}. <Ref> For the first result, we observe that, since Ω_1 < Ω_2 ≤ 0: Ω_1 ≤X/Y≤Ω_2 ⋀ X < 0 ⋀ Y > 0 ⇔Ω_1 ≤X/Y≤Ω_2 ⋀ Y > 0 Thus, it suffices to compute the probability of the latter (simpler) event. Now, set Z = X/Y. Then f_Z | Y(z | y) = f_X(z y), so: f_ZY(z,y) = f_Z | Y(z | y) · f_Y(y) = f_X(z y) · f_Y(y) Therefore: Ω_1 ≤XY≤Ω_2, X < 0, Y > 0 = Ω_1 ≤ Z ≤Ω_2, Y > 0 = ∫_0^+∞∫_Ω_1^Ω_2 f_ZY(z,y) dz dy = ∫_0^+∞∫_Ω_1^Ω_2 f_X(zy) · f_Y(y) dz dy = ∫_0^+∞(∫_Ω_1^Ω_2 f_X(zy) dz ) f_Y(y) dy = ∫_0^+∞(∫_Ω_1 y^Ω_2 y1/y f_X(x) dx ) f_Y(y) dy = ∫_0^+∞(∫_Ω_1 y^Ω_2 y f_X(x) dx ) f_Y(y)/y dy The proof of the second result is completely analogous. <Ref> The results on G^xxi (i∈{1,2}) are direct consequences of Proposition <ref>, the fact that yxi are normal random variables as described in Table <ref>, and the independence/mutual exclusiveness of the involved random variables (which allow us to use the restricted disjunction/conjunction formulas from formula (<ref>), Subsection <ref>). For the case of G^xxω_1ω_2, applying Corollary <ref>, we get that G^xxω_1ω_2 is true if and only if: ([G^x]x1 > 0) ⋀ ([G^x]x2 < 0) ⋀ ([G^x]x̅1 < 0) ⋀ ([G^x]x̅2 > 0) ⋀( ω_1 < [G^x]x̅2/[G^x]x̅2 - [G^x]x̅1 < ω_2 ) ⋁ ([G^x]x1 < 0) ⋀ ([G^x]x2 > 0) ⋀ ([G^x]x̅1 > 0) ⋀ ([G^x]x̅2 < 0) ⋀(ω_1 < [G^x]x̅2/[G^x]x̅2 - [G^x]x̅1 < ω_2 ) Obviously, the above disjunction contains mutually exclusive events, so the probability G^xxω_1ω_2 is the sum of the probability of each disjunct (by the restricted disjunctive formula – see formula (<ref>), Subsection <ref>). So, let us compute the probability of the first disjunct. We observe that the events [G^x]x1, [G^x]x2 are independent to each other and also independent to the other conjuncts. Moreover: ω_1 < [G^x]x̅2/[G^x]x̅2 - [G^x]x̅1 < ω_2 ⇔ω_1-1/ω_1 < [G^x]x̅1/[G^x]x̅2 < ω_2-1/ω_2 Thus, we can apply Lemma <ref> for the last three conjuncts (for Ω_1 = ω_1-1/ω_1, Ω_2 = ω_2-1/ω_2), getting that the probability of the first conjunction is equal to: (1 - F_xx1(0)) · F_xx2(0) ·∫_0^+∞(∫_Ω_1 u_2^Ω_2 u_2 f_xx̅1(u_1) du_1 ) f_xx̅2(u_2)/u_2 du_2 Working analogously for the second disjunct, and summing the resulting probability with the one above, we get the result. For G^xxω_1ω_2, we work analogously, applying the second bullet of Corollary <ref> as above. For G^xx, we observe that if G^xx is true, then G^x is degenerate, which has probability 0. 
<Ref> The proof is direct from Proposition <ref> (and the respective Table <ref>), combined with the fact that the different cases in the disjunction are mutually exclusive, so we can use the restricted disjunction formula of (<ref>) in Subsection <ref>. <Ref>] The proof is direct from Proposition <ref> (and the respective Table <ref>), combined with the fact that the different cases in the disjunction are mutually exclusive, so we can use the restricted disjunction formula of (<ref>) in Subsection <ref>. §.§ Effect of modifying tolerance () <Ref> We first observe that, for any x∈{r,c} and any a,b,c such that: 0 ≤ a ≤ b ≤ c ≤ 1, we have that: G^xxac = G^xxab + G^xxbcROM1 G^xxac = 0 ⇔ a = c ROM2 G^xxac = G^xxab + G^xxbcRPM1 G^xxac = 0 ⇔ a = c RPM2 From Theorem <ref>, and for i = 1,2: mG: ε_i -misinformed = P_r,i· P_c,i, where P_r,i, P_c,i are determined by the second column of Table <ref> for the respective ε_i. Similarly, from Theorem <ref>, and for i = 1,2: mG: inverse-ε_i -misinformed = P_r,i' · P_c,i', where P_r,i', P_c,i' are determined by the third column of Table <ref> for the respective ε_i. Now, let us focus on the first bullet of the proposition. By Tables <ref>, <ref>, <ref>, it is easy to conclude that, for any x ∈{r,c}, i ∈{1,2}, the computation of P_x,i, P_x,i' is not affected by the value of ε_i, and, thus: P_x,1 = P_x,2, P_x,1' = P_x,2' for x∈{r,c}, which shows the result. Now, let us focus on the second bullet, and let us consider P_r,i, P_r,i' first. Set: ω_1,1 = max{0,p^0-ε_1}, ω_2,1 = min{1,p^0+ε_1}, ω_1,2 = max{0,p^0-ε_2}, ω_2,2 = min{1,p^0+ε_2} Since 0 ≤ε_1 < ε_2, we get that: 0 ≤ω_1,2≤ω_1,1≤ω_2,1≤ω_2,2≤ 1. Moreover, since ε_1 < ε_2, it follows that: ω_1,1 = ω_1,2⇔ω_1,1 = ω_1,2 = 0 ⇔( [ p^0 ≤ε_1; and; p^0 ≤ε_2 ]) ⇔ p^0 ≤ε_1 Analogously: ω_2,1 = ω_2,2⇔ω_2,1 = ω_2,2 = 1 ⇔( [ 1 - p^0 ≤ε_1; and; 1 - p^0 ≤ε_2 ]) ⇔ 1 - p^0 ≤ε_1 Using the order among ω_i,j, and by applying (ROM1) twice, we get that: G^rrω_1,2ω_2,2 = G^rrω_1,2ω_1,1 + G^rrω_1,1ω_2,1 + G^rrω_2,1ω_2,2 Now given the fact that probabilities are non-negative, and (ROM2), we have: G^rrω_1,1ω_2,1≤G^rrω_1,2ω_2,2 and: G^rrω_1,1ω_2,1 = G^rrω_1,2ω_2,2⇔ ( ω_1,1 = ω_1,2 and ω_2,1 = ω_2,2) Using analogous reasoning we get: G^rrω_1,1ω_2,1≤G^rrω_1,2ω_2,2 and: G^rrω_1,1ω_2,1 = G^rrω_1,2ω_2,2⇔ ( ω_1,1 = ω_1,2 and ω_2,1 = ω_2,2) Using the above, and Tables <ref>, <ref>, <ref>, we can easily conclude that P_r,1≤ P_r,2 and P_r,1' ≤ P_r,2'. Moreover: P_r,1 = P_r,2⇔( [ ω_1,1 = ω_1,2; and; ω_2,1 = ω_2,2 ]) ⇔( [ p^0 ≤ε_1; and; 1-p^0 ≤ε_1 ]) Analogously: P_r,1' = P_r,2' ⇔( [ p^0 ≤ε_1; and; 1 - p^0 ≤ε_1 ]) Reasoning analogously for the case of P_c,i, P_c,i', we get: For P_c,1≤ P_c,2 : P_c,1 = P_c,2⇔( [ q^0 ≤ε_1; and; 1 - q^0 ≤ε_1 ]) For P_c,1' ≤ P_c,2' : P_c,1' = P_c,2' ⇔( [ q^0 ≤ε_1; and; 1 - q^0 ≤ε_1 ]) By the hypothesis of the second bullet with regards to NE(G^0), Tables <ref>, <ref>, <ref>, and the above relations, the cases (2a), (2b) of the Theorem follow easily. Now let us focus on the third bullet. First, we observe that, by Table <ref>, the result is obvious for the case of ε-misinformed, so let us focus on the case of inverse-ε-misinformed. If ε_2 ≤ 0.5, then ε_1 ≤ 0.5, so the result is again obvious by Table <ref>. So let us focus on the scenario where ε_2 > 0.5. To show the result for this case, we use an approach similar to the one employed for the second bullet. In particular, we consider P_r,i' first. 
Set: ω_1,1' = max{0,1-ε_1}, ω_2,1' = min{1,ε_1}, ω_1,2' = max{0,1-ε_2}, ω_2,2' = min{1,ε_2} Using an analogous procedure (as in the second bullet), and the fact that 0 ≤ε_1 < ε_2, we conclude that: 0 ≤ω_1,2' ≤ω_1,1' ≤ω_2,1' ≤ω_2,2' ≤ 1 ω_1,1' = ω_1,2' ⇔ε_1 ≤ 1 ω_2,1' = ω_2,2' ⇔ε_1 ≤ 1 Also, using (RPM1), (RPM2), and the fact that probabilities are non-negative, we get, as in the second bullet: G^rrω_1,1'ω_2,1'≤G^rrω_1,2'ω_2,2' G^rrω_1,1'ω_2,1' = G^rrω_1,2'ω_2,2' ⇔(ω_1,1' = ω_1,2' and ω_2,1' = ω_2,2' ) Therefore, given that ε_1 > ε_2 > 0.5: (P_r,1' ≤ P_r,2' and P_r,1' = P_r,2' ) ⇔ε_1 ≤ 1 Working analogously for P_c,i', we get: ( P_c,1' ≤ P_c,2' and P_c,1' = P_c,2' ) ⇔ε_1 ≤ 1 By the hypothesis of the second bullet with regards to NE(G^0), Tables <ref>, <ref>, <ref>, and the above relations, the remaining subcases of (3a), (3b) of the Theorem follow easily. §.§ Effect of changing the game () and the mean () <Ref> From Table <ref>, we observe that, for the given x, and for any i ∈{1,2}: μ_xri = (P^0_r[1,i] + M^x_r[1,i]) - (P^0_r[2,i] + M^x_r[2,i]) = (P^0_r[1,i] + M^x_r[1,i] + a) - (P^0_r[2,i] + M^x_r[2,i] + a) = μ_xri. Analogously, we can show that μ_xci = μ_xci for any i ∈{1,2}. Also, it is clear that _xyi = _xyi for any y ∈{r,c}, i ∈{1,2}. Combining these two facts, the results are obvious. <Ref> Take any x∈{r,c}. Set b_x = - a_G - a_x. We observe that G^0 + M^x = G^0 + M^x + b_x. Thus, by Proposition <ref>, we get, for x ∈{r,c}: * For any i ∈{1,2}, G^xxi = G^xxi * For any 0 ≤ω_1 ≤ω_2 ≤ 1, G^xxω_1ω_2 = G^xxω_1ω_2 * For any 0 ≤ω_1 ≤ω_2 ≤ 1, G^xxω_1ω_2 = G^xxω_1ω_2 In addition, game theoretic results tell us that NE(G^0) = NE(G^0). Combining the above with Theorems <ref>, <ref> and Table <ref>, the result follows directly. §.§ Effect of modifying noise intensity () <Ref> Consider the family of random variables y x i for mG. By the definition of M, and by Table <ref>, it follows that, for any x,y ∈{r,c}, and for any i ∈{1,2}, it holds that: μ_ y x i = λμ_ y x i and _ y x i = λ^2 _ y x i Therefore yxi = λ yxi. Using the latter relationship, we get, for any x,y ∈{r,c}, i∈{1,2}: F_ y x i(0) = y x i≤ 0 = λ· y x i ≤ 0 = y x i ≤ 0 = F_ y x i(0) To simplify the equations in the following, let us set, for any x ∈{r,c}, i ∈{1,2}: U_i = xx̅1, U_i = xx̅1, and let f_i, f_i the respective cdfs for U_i, U_i. Then, using Lemma <ref>, and the above notation, for any ω_1,ω_2 ∈ℝ such that 0 ≤ω_1 < ω_2 ≤ 1, it holds that: ∫_0^+∞(∫_ω_1 - 1/ω_1 u_2^ω_2 - 1/ω_2 u_2 f_xx̅1(u_1) du_1 ) f_xx̅2(u_2)/u_2 du_2 = ∫_0^+∞(∫_ω_1 - 1/ω_1 u_2^ω_2 - 1/ω_2 u_2f_1(u_1) du_1 ) f_2(u_2)/u_2 du_2 = ω_1-1/ω_1≤U_1 /U_2 ≤ω_1-1/ω_1, U_1 < 0, U_2 > 0 = ω_1-1/ω_1≤λ· U_1/λ· U_2≤ω_1-1/ω_1, λ· U_1 < 0, λ· U_2 > 0 = ω_1-1/ω_1≤U_1/U_2≤ω_1-1/ω_1, U_1 < 0, U_2 > 0 = ∫_0^+∞(∫_ω_1 - 1/ω_1 u_2^ω_2 - 1/ω_2 u_2 f_1(u_1) du_1 ) f_2(u_2)/u_2 du_2 = ∫_0^+∞(∫_ω_1 - 1/ω_1 u_2^ω_2 - 1/ω_2 u_2 f_xx̅1(u_1) du_1 ) f_xx̅2(u_2)/u_2 du_2 Analogously, it can be shown that: ∫_-∞^0(∫_ω_1 - 1/ω_1 u_2^ω_2 - 1/ω_2 u_2 f_xx̅1(u_1) du_1 ) f_xx̅2(u_2)/u_2 du_2 = ∫_-∞^0(∫_ω_1 - 1/ω_1 u_2^ω_2 - 1/ω_2 u_2 f_xx̅1(u_1) du_1 ) f_xx̅2(u_2)/u_2 du_2 From the above equations, it is obvious that the respective probabilities in Table <ref> for mG and mG are equal. Moreover, since G^0 = λ· G^0, it follows that the Nash equilibria of G^0 and G^0 are the same. Combining these facts with Table <ref> and Theorems <ref>, <ref>, the result follows. plain
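As a quick numerical illustration of the scaling result just proved, the sketch below evaluates, for the row player, the probability that her noisy game makes the first strategy dominant, once for a reference game and once for the same game with payoffs scaled by λ and the noise variance by λ². The payoff values, the unbiased normal noise of variance σ² on each entry, and the resulting independence of the two gain variables are assumptions made only for this check.

import numpy as np
from scipy.stats import norm

def prob_s1_dominant_row(Pr, sigma):
    # probability that both of the row player's noisy gain variables are positive,
    # i.e. that s1 is dominant for her; each payoff entry receives independent
    # N(0, sigma^2) noise, so each gain (a difference of two entries) has sd sqrt(2)*sigma
    gains = Pr[0, :] - Pr[1, :]
    return float(np.prod(1.0 - norm.cdf(0.0, loc=gains, scale=np.sqrt(2.0) * sigma)))

Pr = np.array([[3.0, 0.0], [0.0, 2.0]])      # illustrative payoffs
lam = 7.0
print(prob_s1_dominant_row(Pr, sigma=1.0))
print(prob_s1_dominant_row(lam * Pr, sigma=lam * 1.0))   # variance lam**2 larger: same value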
http://arxiv.org/abs/2307.02033v1
20230705053025
Multipole tidal effects in the gravitational-wave phases of compact binary coalescences
[ "Tatsuya Narikawa" ]
gr-qc
[ "gr-qc" ]
[email protected] ^1Institute for Cosmic Ray Research, The University of Tokyo, Chiba 277-8582, Japan We present the component form of the multipole tidal phase for the gravitational waveform of compact binary coalescences (MultipoleTidal), which consists of the mass quadrupole, the current quadrupole, and the mass octupole moments. We demonstrate the phase evolution and the phase difference between the tidal multipole moments (MultipoleTidal) and the mass quadrupole (PNTidal) as well as the numerical-relativity calibrated model (NRTidalv2). We find the MultipoleTidal gives a larger phase shift than the PNTidal, and is closer to the NRTidalv2. We compute the matches between waveform models to see the impact of the tidal multipole moments on the gravitational wave phases. We find the MultipoleTidal gives larger matches to the NRTidalv2 than the PNTidal, in particular, for high masses and large tidal deformabilities. We also apply the MultipoleTidal model to binary neutron star coalescence events GW170817 and GW190425. We find that the current quadrupole and the mass octupole moments give no significant impact on the inferred tidal deformability. Multipole tidal effects in the gravitational-wave phases of compact binary coalescences Tatsuya Narikawa August 1, 2023 ======================================================================================= § INTRODUCTION The post-Newtonian (PN) tidal waveform of compact binary coalescences for the mass quadrupole have been derived up to 2.5PN (relative 7.5PN) order for phase <cit.> and have used in analyses of binary neutron star (BNS) coalescence signals detected by the Advanced LIGO and Advanced Virgo detectors, GW170817 and/or GW190425 <cit.> (see also Refs. <cit.>). The complete and correct PN tidal phase up to 7.5PN order for the mass quadrupole, the current quadrupole, and the mass octupole interactions have been derived using the PN-matched multiplolar-post-Minkowskian formalism <cit.>. In our previous study <cit.>, we rewrote the complete and correct HFB (the abbreviation for Henry, Faye, Blanchet) form to the component form for the mass quadrupole interactions (hereafter PNTidal) as a function of the dimensionless tidal deformability for the individual stars, Λ, in a convenient way for data analyses. And we have reanalyzed the data around low-mass events identified as binary black holes by using the corrected version of the PNTidal model to test the exotic compact object hypothesis. The PNTidal model has also been used in analyses of BNS signals <cit.> (see also Ref. <cit.>). In Ref. <cit.>, the impact of the updated equation-of-state (EOS) insensitive relations for multipole tidal deformabilities and for f-mode dynamical tidal correction and the mass quadrupole in the context of BNS signals has been studied. In this study, we present the component form of the multipole tidal phases (hereafter MultipoleTidal), which consists of the mass quadrupole, the current quadrupole, and the mass octupole moments. By using the component form, we compare the match between the MultipoleTidal model and the NRTidalv2 model, which is the numerical relativity (NR) calibrated model for the tidal part, and the match between the PNTidal model and the NRTidalv2 model. We find that the match between the MultipoleTidal model and the NRTidalv2 model is better than the match between the PNTidal model and the NRTidalv2 model, in particular, for high mass and large tidal deformabilities. 
We also apply the MultipoleTidal model to BNS coalescence events GW170817 and GW190425 to investigate the impact of the model on the parameter estimation of the current events. The outline of the paper is as follows. In Section <ref>, we present the component form of the multipole tidal phases, MultipoleTidal, which consists of the mass quadrupole, the current quadrupole, and the mass octupole moments. In Section <ref>, we demonstrate the impact of the multipole tidal contributions by comparing the MultipoleTidal model with the PNTidal model and the NRTidalv2 model. In Section <ref>, we present the phase evolution for the the MultipoleTidal model, the PNTidal model, and several NR calibrated tidal models, and the phase difference between the MultipleTidal and the PNTidal. In Section <ref>, we compare the match between the MultipoleTidal and the NRTidalv2 and the match between the PNTidal and the NRTidalv2. In Section <ref>, we show the parameter estimation results of BNS coalescence events GW170817 and GW190425 with the MultipoleTidal. Section <ref> is devoted to a conclusion. In Appendix <ref>, we summarize the EOS-insensitive relations for the tidal multipole moments. In Appendix <ref>, we present two-dimensional posterior distributions of source parameters for GW170817 and GW190425 obtained with three different tidal waveform models: MultipoleTidal, PNTidal, and NRTidalv2. § WAVEFORM MODELS FOR INSPIRALING BINARY NEUTRON STARS In this section, we show multipole tidal effects in the GW phases . First, we briefly introduce the HFB form derived in Ref. <cit.>. Then, we present the component form. Finally, we present the identical-NS form, which focus on dominant contributions and is useful to fit NR waveforms. §.§ Multipole tidal interactions The tidal polarizability coefficients are denoted as <cit.> Gμ_A^(2)≡( G m_A/c^2)^5 Λ_A = 2/3 k_A^(2) R_A^5, for the mass quadrupole moment, Gσ_A^(2)≡( G m_A/c^2)^5 Σ_A = 1/48 j_A^(2) R_A^5, for the current quadrupole moment, and Gμ_A^(3)≡( G m_A/c^2)^7 Λ_A^(3) = 2/15 k_A^(3) R_A^7, for the mass octupole moment, where k_A^(2), j_A^(2), and k_A^(3) are Love numbers and Λ_A, Σ_A, and Λ_A^(3) are dimensionless multipole tidal deformability parameters, of each object with a component mass m_A and a radius R_A. §.§ HFB form The complete and correct form up to 2.5PN (relative 7.5PN) order for the mass quadrupole, the current quadrupole, and the mass octupole contributions to the GW tidal phases have been derived <cit.>. In this subsection, we briefly introduce the HFB form. The polarizability parameters are redefined for convenience as μ_±^(l) = 1/2( m_B/m_Aμ_A^(l)±m_A/m_Bμ_B^(l)), for the mass multipole moments, and σ_±^(l) = 1/2( m_B/m_Aσ_A^(l)±m_A/m_Bσ_B^(l)), for the current multipole moments, where l is a positive integer. For two identical NS (NS is the abbreviation of a neutron star), μ_+^(l) = μ_A^(l) = μ_B^(l) and μ_-^(l) = 0 and σ_+^(l) = σ_A^(l) = σ_B^(l) and σ_-^(l) = 0. The adimensionalized parameters corresponding to Eqs. (<ref>) and (<ref>) are defined as μ̃_±^(l) = ( c^2/G M)^2l+1 G μ_±^(l), and σ̃_±^(l) = ( c^2/G M)^2l+1 G σ_±^(l), respectively, where M = m_A + m_B is the total mass. The HFB-form for the tidal phases is derived as Ψ_HFB(f) = - 9/16 η^2 x^5/2{ (1 + 22η) μ̃_+^(2) + Δμ̃_-^(2). 
+ [ ( 195/112 + 1595/28η +325/84η^2 ) μ̃_+^(2) + Δ( 195/112 + 4415/336η) μ̃_-^(2) + ( -5/126 + 1730/21η) σ̃_+^(2) - 5/126Δσ̃_-^(2)] x - π[ (1 + 22η) μ̃_+^(2) + Δμ̃_-^(2)] x^3/2 + [ ( 136190135/27433728 + 975167945/4572288η - 281935/6048η^2 +5/3η^3 ) μ̃_+^(2) + Δ( 136190135/27433728 + 211985/2592η + 1585/1296η^2 ) μ̃_-^(2). . + ( - 745/4536 + 1933490/5103η - 3770/81η^2 ) σ̃_+^(2) + Δ( - 745/4536 + 19355/243η) σ̃_-^(2) + 1000/27ημ̃_+^(3)] x^2 . + π[ ( - 397/112- 5343/56η + 1315/42η^2 ) μ̃_+^(2) + Δ( - 397/112 - 6721/336η) μ̃_-^(2) + ( 2/21 - 8312/63η) σ̃_+^(2) + 2/21Δσ̃_-^(2)] x^5/2}, where x=[π G M (1+z) f / c^3]^2/3 is the dimensionless PN expansion parameter, z is the source redshift, η=m_1 m_2 / (m_1+m_2)^2 is the symmetric mass ratio, and Δ = (m_A - m_B) / M is the normalized mass difference. §.§ Component form In our previous study <cit.>, we rewrote the HFB form to the component form for the mass quadrupole interactions as a function of the dimensionless tidal deformability for the component stars, Λ_A,B, which is called the PNTidal model. In this study, we extend the PNTidal to the multipole moments, which consists of the mass quadrupole, the current quadrupole, and the mass octupole contributions, which is called the MultipoleTidal model. By using X_A,B = m_A,B / M, the adimensionalized parameters Eqs. (<ref>) and (<ref>) are rewritten as 2μ̃_±^(2) = (1-X_A) X_A^4 Λ_A ± (A ↔ B), for the mass quadrupole moment, 2σ̃_±^(2) = (1-X_A) X_A^4 Σ_A ± (A ↔ B), for the current quadrupole moment, and 2μ̃_±^(3) = (1-X_A) X_A^4 Λ_A^(3)± (A ↔ B), for the mass octupole moment. Also, X_B and Δ are rewritten as X_B=1-X_A and Δ = 2 X_A - 1 by using X_A. §.§.§ Mass quadrupole The leading order effect for the mass quadrupole interactions appears relative 5PN order on GW phases <cit.>. The PN tidal phases for the mass quadrupole have been derived up to 2.5PN (relative 7.5PN) order <cit.>, and completed and corrected up to relative 7.5PN order by Ref. <cit.> as shown in Eq. (<ref>). The component form for the mass quadrupole interactions (PNTidal) is written as a function of Λ_A,B as <cit.> Ψ_Component^MassQuad(f) = 3/128η x^5/2Λ_A X_A^4 [ -24(12-11 X_A) - 5/28 (3179-919 X_A - 2286 X_A^2 + 260 X_A^3 ) x . +24 π (12 - 11 X_A) x^3/2 -5 ( 193986935/571536 - 14415613/381024 X_A - 57859/378 X_A^2 - 209495/1512 X_A^3 + 965/54 X_A^4 - 4 X_A^5 ) x^2 . + π/28 (27719 - 22415 X_A + 7598 X_A^2 - 10520 X_A^3) x^5/2] + (A ↔ B). This is simply written in terms of Λ_A,B, which is measurable via GWs. We have implemented it in the waveform section, LALSimulation <cit.>, as part of the LSC Algorithm Library (LAL) <cit.> and used it in data analyses <cit.>. §.§.§ Current quadrupole The leading order effect for the current quadrupole interactions appears relative 6PN order on GW phases <cit.> (see also Refs. <cit.>). The terms have been completed up to relative 7.5PN order by Ref. <cit.> as shown in Eq. (<ref>). We rewrite the HFB form to the the component form for the current quadrupole interactions as a function of the dimensionless current quadrupole parameters for the individual stars, Σ_A,B, as Ψ_Component^CurrentQuad(f) = 3/128η x^5/2Σ_A X_A^4 [ - 20/21 (1037 - 1038 X_A ) x . - 5/1701( 1220287 - 761308 X_A - 270312 X_A^2 - 190008 X_A^3 ) x^2 . + 16/21π (2075 - 2078 X_A ) x^5/2] + (A ↔ B). As the mass quadrupole, this is simply written in terms of Σ_A,B, which is measurable via GWs. We implement it in the LALSimulation and use it in our data analyses. 
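For reference, a short Python sketch of how the mass-quadrupole component form above can be evaluated is given below, with x computed from the redshifted total mass as defined earlier. The physical constants are approximate, and the routine is only an illustration of the equation as written, not the LALSimulation implementation.

import numpy as np

G_SI = 6.674e-11          # m^3 kg^-1 s^-2 (approximate)
C_SI = 2.998e8            # m s^-1 (approximate)
MSUN_SI = 1.989e30        # kg (approximate)

def pn_tidal_mass_quadrupole_phase(f, m1, m2, lambda1, lambda2, z=0.0):
    # Relative 5PN--7.5PN mass-quadrupole tidal phase (component form above).
    # Masses in solar masses, f in Hz; returns the tidal contribution to the
    # frequency-domain GW phase in radians.
    M = (m1 + m2) * MSUN_SI * (1.0 + z)                 # redshifted total mass
    eta = m1 * m2 / (m1 + m2) ** 2
    x = (np.pi * G_SI * M * f / C_SI ** 3) ** (2.0 / 3.0)

    def one_body(XA, lamA):
        poly = (-24.0 * (12.0 - 11.0 * XA)
                - 5.0 / 28.0 * (3179.0 - 919.0 * XA - 2286.0 * XA**2 + 260.0 * XA**3) * x
                + 24.0 * np.pi * (12.0 - 11.0 * XA) * x ** 1.5
                - 5.0 * (193986935.0 / 571536.0 - 14415613.0 / 381024.0 * XA
                         - 57859.0 / 378.0 * XA**2 - 209495.0 / 1512.0 * XA**3
                         + 965.0 / 54.0 * XA**4 - 4.0 * XA**5) * x**2
                + np.pi / 28.0 * (27719.0 - 22415.0 * XA + 7598.0 * XA**2
                                  - 10520.0 * XA**3) * x ** 2.5)
        return lamA * XA**4 * poly

    XA = m1 / (m1 + m2)
    return 3.0 / (128.0 * eta) * x ** 2.5 * (one_body(XA, lambda1) + one_body(1.0 - XA, lambda2))

# equal-mass 1.35+1.35 Msun binary with Lambda = 300 for both components
f = np.linspace(30.0, 1000.0, 500)
psi = pn_tidal_mass_quadrupole_phase(f, 1.35, 1.35, 300.0, 300.0)

The current-quadrupole and mass-octupole pieces can be added in the same way, as Sigma_A- and Lambda_A^(3)-weighted polynomials in x following the equations above and below.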
§.§.§ Mass octupole The leading order effect of the mass octupole interactions appears at relative 7PN order in the GW phase <cit.>. The terms have been completed up to relative 7.5PN order by Ref. <cit.> as shown in Eq. (<ref>). We rewrite the HFB form into the component form for the mass octupole interactions as a function of the dimensionless mass octupole parameters of the individual stars, Λ^(3)_A,B, as Ψ_Component^MassOct(f) = 3/128η x^5/2Λ_A^(3) X_A^4 [ - 4000/9( 1 - X_A ) x^2 ] + (A ↔ B). As with the mass quadrupole and the current quadrupole, this is simply written in terms of Λ_A,B^(3), which are measurable via GWs. We implement it in LALSimulation and use it in our data analyses. §.§ Identical-NS form For realistic cases, the tidal contributions to the GW phase are dominated by the symmetric contribution in terms of the multipole tidal polarizability (e.g., <cit.>). Motivated by this fact, we obtain the identical-NS form by ignoring the asymmetric contributions. §.§.§ Mass quadrupole The identical-NS form for the mass quadrupole is obtained from Eq. (<ref>) by replacing X_A,B by 1/2 and Λ_A,B by Λ̃ as <cit.> Ψ_Identical-NS^MassQuad (f) = 3/128η( - 39/2Λ̃) x^5/2 ×[ 1 + 3115/1248 x - π x^3/2 + 379931975/44579808 x^2 - 2137/546π x^5/2]. The NR calibrated tidal waveform models <cit.>, <cit.>, and <cit.> are constructed by extension of this form. The binary tidal deformability Λ̃ is obtained as follows. By rewriting the leading term of Eq. (<ref>) as (the leading term of Eq. (<ref>)) / ( 3/128η x^5/2) = Λ_A X_A^4 [ -24(12-11 X_A) ] + (A ↔ B) = ( -39/2) ( -39/2)^-1 (-24) [ (12-11 X_A) X_A^4 Λ_A . . + (A ↔ B) ] = ( -39/2) 16/13[ (12-11 X_A) X_A^4 Λ_A + (A ↔ B) ], and comparing it with Eq. (<ref>), it is obtained as Λ̃ = 16/13[ ( 12 - 11X_A ) X_A^4 Λ_A + (A ↔ B) ]. Λ̃ is a mass-weighted linear combination of the component tidal parameters Λ_A,B. §.§.§ Current quadrupole As with the mass quadrupole, the identical-NS form for the current quadrupole is obtained from Eq. (<ref>) by replacing X_A,B by 1/2 and Σ_A,B by Σ̃ as follows: Ψ_Identical-NS^CurrentQuad(f) = 3/128η( - 185/3Σ̃) x^5/2 ×[ x + 93538/20979 x^2 - 8/5π x^5/2]. The binary current quadrupole tidal deformability Σ̃ is obtained as follows. By rewriting the leading term of Eq. (<ref>) as (the leading term of Eq. (<ref>)) / ( 3/128η x^5/2) = Σ_A X_A^4 [ -20/21 (1037 - 1038 X_A) ] + (A ↔ B) = ( - 185/3) ( - 185/3)^-1(-20/21) [ (1037-1038 X_A) . . × X_A^4 Σ_A + (A ↔ B) ] = ( - 185/3) 4/259[ (1037-1038 X_A) X_A^4 Σ_A + (A ↔ B) ], and comparing it with Eq. (<ref>), it is obtained as Σ̃ = 4/259[ ( 1037 - 1038X_A ) X_A^4 Σ_A + (A ↔ B) ]. Σ̃ is a mass-weighted linear combination of the component Σ_A,B[Our definition of Σ̃ is different from Eq. (6) of <cit.> by a factor of 2.]. §.§.§ Mass octupole As with the mass quadrupole and the current quadrupole, the identical-NS form for the mass octupole is obtained from Eq. (<ref>) by replacing X_A,B by 1/2 and Λ_A,B^(3) by Λ̃^(3) as Ψ_Identical-NS^MassOct(f) = 3/128η(- 250/9Λ̃^(3)) x^5/2[ x^2 ]. The binary mass octupole tidal deformability Λ̃^(3) is obtained as follows. By rewriting the leading term of Eq. (<ref>) as (the leading term of Eq. (<ref>)) / ( 3/128η x^5/2) = Λ_A^(3) X_A^4 [ -4000/9 (1- X_A) ] + (A ↔ B) = ( -250/9) ( -250/9)^-1( -4000/9) [ (1- X_A) X_A^4 Λ_A^(3). . + (A ↔ B) ] = ( -250/9) 16 [ (1 - X_A) X_A^4 Λ_A^(3) + (A ↔ B) ], and comparing it with Eq. (<ref>), it is obtained as Λ̃^(3) = 16 [ ( 1 - X_A ) X_A^4 Λ_A^(3) + (A ↔ B) ], which is the same definition as in Ref. <cit.>.
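As a cross-check of the identical-NS limits above, the short Python sketch below collects the three mass-weighted combinations Λ̃, Σ̃, and Λ̃^(3); for an equal-mass binary each combination reduces to the common component value, as expected from the derivations. The function name is ours, and the example numbers are those used in the comparison of the next section.

def binary_tidal_combinations(mA, mB, lamA, lamB, sigA, sigB, lam3A, lam3B):
    # Mass-weighted combinations entering the identical-NS form.
    M = mA + mB
    XA, XB = mA / M, mB / M
    lam_t = 16.0 / 13.0 * ((12.0 - 11.0 * XA) * XA**4 * lamA
                           + (12.0 - 11.0 * XB) * XB**4 * lamB)
    sig_t = 4.0 / 259.0 * ((1037.0 - 1038.0 * XA) * XA**4 * sigA
                           + (1037.0 - 1038.0 * XB) * XB**4 * sigB)
    lam3_t = 16.0 * ((1.0 - XA) * XA**4 * lam3A + (1.0 - XB) * XB**4 * lam3B)
    return lam_t, sig_t, lam3_t

# Equal-mass binary: the combinations reduce to the component values (300, 3.1, 483).
print(binary_tidal_combinations(1.35, 1.35, 300.0, 300.0, 3.1, 3.1, 483.0, 483.0))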
Λ̃^(3) is again a mass-weighted linear combination of the component Λ_A,B^(3). § COMPARING MULTIPOLETIDAL WITH PNTIDAL AND NRTIDALV2 In this section, we compare the MultipoleTidal with the PNTidal and the NRTidalv2 as a representative of the NR calibrated waveform models. The NRTidalv2 model is an upgrade of the NRTidal model. We show the phase evolution for the MultipoleTidal and other models, the phase difference between the MultipoleTidal and the PNTidal, the match between the MultipoleTidal and the NRTidalv2, the one between the PNTidal and the NRTidalv2, and the one between the MultipoleTidal and the PNTidal for the Einstein Telescope (ET) sensitivity. Finally, we investigate the impact of the current quadrupole and the mass octupole on the parameter estimation of the BNS coalescence events GW170817 and GW190425. §.§ Comparison of phase evolution with NR calibrated model In this subsection, we compare the phase evolution of the tidal multipole moments. Figure <ref> shows the phase evolution for the mass quadrupole Λ (PNTidal, Eq. (<ref>)), the current quadrupole Σ (Eq. (<ref>)), the mass octupole Λ^(3) (Eq. (<ref>)), and the combined tidal multipole contributions (MultipoleTidal) as a function of x. Here, we consider an equal-mass BNS with m_A,B=1.35 M_⊙, Λ_A,B=300, Σ_A,B=3.1, and Λ_A,B^(3)=483. We use quasiuniversal fitting relations to obtain Σ and Λ^(3) from Λ <cit.>. The phase evolution for the PNTidal is very close to the one for the MultipoleTidal. This means that the mass quadrupole contribution dominates the multipole tidal phase. The MultipoleTidal gives a larger phase shift than the PNTidal as x increases. Figure <ref> shows the absolute magnitude of the phase difference between the MultipoleTidal (Λ, Σ, Λ^(3)) and the PNTidal (Λ) as a function of x. Here, we consider the same equal-mass BNS as used in Fig. <ref>. The phase difference between the MultipoleTidal and the PNTidal models is about 0.1 rad at 1000 Hz, which corresponds to x∼0.125. Figure <ref> shows the phase evolution for different tidal phase models normalized by the leading-order PNTidal at relative 5PN order as a function of x. We compare the PNTidal (Λ), the MultipoleTidal (Λ, Σ, Λ^(3)), and the NRTidalv2 as a representative of the NR calibrated models for the same equal-mass BNS as used in Fig. <ref>. The MultipoleTidal is closer to the NRTidalv2 than the PNTidal. The NRTidalv2 gives a larger phase shift than the MultipoleTidal as x increases. The phase difference between the MultipoleTidal and the NRTidalv2 grows with x up to x∼0.16. For comparison, the KyotoTidal and NRTidal are also plotted. Here, the KyotoTidal, the NRTidal, and the NRTidalv2 models are NR calibrated tidal models (the family of waveforms with tidal interactions is summarized in, e.g., Refs. <cit.>). §.§ Match computations with NR calibrated model To investigate how the MultipoleTidal model differs from the PNTidal model, we compare the MultipoleTidal model, the PNTidal model, and the NRTidalv2 model by computing the match between them. The match between waveform models is defined as the inner product between waveform models maximized over the phase and the time as F = max_ϕ_c,t_c (h_1(ϕ_c,t_c) | h_2)/√((h_1|h_1)(h_2|h_2)), where ϕ_c and t_c are arbitrary phase and time shifts, and h_1,2 denote the waveform models. Here, the noise-weighted inner product between waveform models is given by (h_1 | h_2) = 4 ℛ∫_f_low^f_high h̃_1(f) h̃_2^*(f)/S_n(f) df, where tildes denote Fourier transforms, the asterisk denotes complex conjugation, ℛ denotes the real part, and S_n(f) is the detector noise spectral density.
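In practice we evaluate F with the open-source PyCBC toolkit introduced below. The following Python sketch illustrates the procedure; since the TF2_MultipoleTidal and TF2_NRTidalv2 approximants are our own LALSimulation additions and are not assumed to be publicly available, the public 'TaylorF2' approximant with and without tidal terms is used here as a stand-in, and 'ET_psd.txt' is a placeholder file name for the ET sensitivity curve.

import pycbc.psd
from pycbc.filter import match
from pycbc.waveform import get_fd_waveform

f_low, f_high, delta_f = 10.0, 1024.0, 1.0 / 128.0
flen = int(f_high / delta_f) + 1

# Two frequency-domain inspiral waveforms differing only in their tidal description.
hp1, _ = get_fd_waveform(approximant='TaylorF2', mass1=1.35, mass2=1.35,
                         lambda1=300.0, lambda2=300.0,
                         delta_f=delta_f, f_lower=f_low)
hp2, _ = get_fd_waveform(approximant='TaylorF2', mass1=1.35, mass2=1.35,
                         lambda1=0.0, lambda2=0.0,
                         delta_f=delta_f, f_lower=f_low)
hp1.resize(flen)
hp2.resize(flen)

# Detector noise PSD; the file name is a placeholder for the ET sensitivity data.
psd = pycbc.psd.from_txt('ET_psd.txt', flen, delta_f, f_low, is_asd_file=True)

# Match maximized over coalescence time and phase.
F, _ = match(hp1, hp2, psd=psd, low_frequency_cutoff=f_low,
             high_frequency_cutoff=f_high)
print(F)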
We use the ET noise curve of Refs. <cit.> for the match computation with f_low=10 Hz, since third-generation GW detectors such as ET will be sensitive in the high-frequency regime above a few hundred Hz, which is suitable for observing BNS coalescence signals <cit.>. To restrict to the inspiral regime, we set the upper frequency cutoff f_high=1024 Hz. We use the open-source PyCBC toolkit <cit.> to compute the match between given waveform models implemented in LALSimulation for a given set of parameters. To focus on the tidal phase difference, we use the same point-particle baseline, the TF2 model up to 3.5PN order for the phase, and neglect tidal amplitude contributions for all tidal models used in this study. Here, TF2 is an abbreviation of TaylorF2, the PN waveform model for the point-particle part <cit.> (very recently, the formula for the phase has been updated to 4.5PN order <cit.>). We newly implement the TF2_MultipoleTidal and the TF2_NRTidalv2 in LALSimulation. We use quasiuniversal fitting relations between the tidal multipole moments to obtain Σ and Λ^(3) from Λ <cit.> (see Appendix <ref> for details). We select 2000 samples for BNS coalescence signals with uniform distributions in m_A∈ [1, 3] M_⊙, m_B ∈ [1, 3] M_⊙, and Λ_A,B∈ [20, 3000], imposing the mass ratio q=m_B/m_A ≤ 1. Here, we consider nonspinning binaries. Fig. <ref> shows the match between the TF2_MultipoleTidal and the TF2_NRTidalv2 (left panel), the one between the TF2_PNTidal and the TF2_NRTidalv2 (middle panel), and the one between the TF2_MultipoleTidal and the TF2_PNTidal (right panel) on the ℳ-Λ̃ plane. Here, ℳ:=(m_1 m_2)^3/5/(m_1 + m_2)^1/5 is the chirp mass, which gives the leading-order evolution of the binary amplitude and phase. For high chirp mass ℳ and large binary tidal deformability Λ̃, the values of the match between the TF2_MultipoleTidal and the TF2_PNTidal models are small, corresponding to the upper right region of the right panel. The match between the TF2_MultipoleTidal and the TF2_NRTidalv2 (left panel) is higher than the one between the TF2_PNTidal and the TF2_NRTidalv2 (middle panel) for high ℳ and large Λ̃, where the MultipoleTidal is effectively improved by the current quadrupole Σ and the mass octupole Λ^(3) contributions. §.§ Analysis of GW170817 and GW190425 To see the impact of the multipole tidal effects on parameter estimation, we apply the MultipoleTidal model to the BNS coalescence events GW170817 and GW190425 using the parameter estimation software LALInference <cit.>. We analyze 128 seconds of data around the coalescence time and the frequency range between 23 and 1000 Hz for GW170817 and between 19.4 and 1000 Hz for GW190425. We compare three tidal phase models: the MultipoleTidal model (the combination of Eq. (<ref>) for the mass quadrupole Λ, Eq. (<ref>) for the current quadrupole Σ, and Eq. (<ref>) for the mass octupole Λ^(3)), the PNTidal model (Eq. (<ref>) for the mass quadrupole Λ), and the NRTidalv2 model for the NR calibrated effects. Here, we do not consider the tidal amplitude contribution for any of the tidal models, to focus on the difference in the phases. We use the TF2 model as the inspiral point-particle baseline for all tidal models. We assume that the spins of the component objects are aligned with the orbital angular momentum and incorporate the 3.5PN-order formula for the couplings between the orbital angular momentum and the component spins <cit.>, and the 3PN-order formulas for the point-mass spin-spin and self-spin interactions <cit.>.
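Both the match study above and the parameter estimation described next obtain Σ and Λ^(3) from Λ through the quasiuniversal fits summarized in the appendix on EOS-insensitive relations. A minimal Python sketch of that mapping follows; the coefficient values shown are placeholders for illustration only and are not the published fitted values, which should be substituted to reproduce, e.g., Σ≃3.11 and Λ^(3)≃483.3 for Λ=300.

import numpy as np

def eos_insensitive_fit(x, coeffs):
    # Quartic-in-ln(x) quasiuniversal fit:
    # ln y = a + b ln x + c (ln x)^2 + d (ln x)^3 + e (ln x)^4.
    a, b, c, d, e = coeffs
    lnx = np.log(x)
    return np.exp(a + b * lnx + c * lnx**2 + d * lnx**3 + e * lnx**4)

# Placeholder coefficients (illustrative only; substitute the published values).
coeffs_sigma = (-2.0, 1.0, 0.01, 0.0, 0.0)
coeffs_lam3 = (-1.0, 1.2, 0.01, 0.0, 0.0)

lam = 300.0
print(eos_insensitive_fit(lam, coeffs_sigma), eos_insensitive_fit(lam, coeffs_lam3))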
We are basically following our previous analyses of GW170817 and GW190425 <cit.>, except for the waveform models used. We compute posterior probability distribution functions (PDFs) and Bayes factors (BFs) using the nested sampling algorithm <cit.> available in the LALInference package <cit.> as part of the LAL <cit.>. The data are obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), released by the LIGO-Virgo-KAGRA (LVK) Collaborations <cit.>. The noise spectral densities estimated with the BayesLine algorithm <cit.> are also obtained from there. We employ a uniform prior on the detector-frame component masses m_1,2^det in the range [0.5, 5.0]M_⊙, the spin magnitudes χ_1z,2z in the range [-0.05, 0.05], and the mass quadrupole tidal deformabilities Λ_1 in the range [0, 5000] and Λ_2 in the range [0, 10000]. To obtain Σ and Λ^(3) from Λ, we use quasiuniversal fitting relations between the tidal multipole moments <cit.> (see Appendix <ref> for details). In our analyses, we marginalize over the coalescence time t_c and the phase at the coalescence time ϕ_c semianalytically. For GW170817, we fix the sky location to the position of AT 2017gfo, which is an electromagnetic counterpart of GW170817 <cit.>. Figure <ref> shows the marginalized posterior PDFs of the binary tidal deformability Λ̃ for GW170817 (left) and GW190425 (right) using the TF2_MultipoleTidal, TF2_PNTidal, and TF2_NRTidalv2 models. The corresponding 90% credible intervals (highest probability density, HPD), the logarithmic Bayes factor of the signal hypothesis against the noise assumption for each model, and the logarithmic Bayes factors between the three tidal phase models and the TF2_MultipoleTidal model are summarized in Table <ref>. Here, to effectively obtain a uniform prior on Λ̃, we weight the posterior of Λ̃ by dividing by the prior, similarly to the LVC analyses <cit.>. The difference between the Λ̃ inferred with the TF2_MultipoleTidal and with the TF2_PNTidal is very small compared with the 90% statistical error. The impact of the additional tidal effects in the MultipoleTidal model on the estimates of Λ̃ is not significant, which is consistent with the results of Ref. <cit.>. A closer look reveals that the TF2_NRTidalv2 gives the smallest median value and the narrowest 90% credible interval for the inferred Λ̃. The TF2_MultipoleTidal gives a smaller median value and a narrower 90% credible interval for the inferred Λ̃ than the TF2_PNTidal, and its distribution is closer to that of the TF2_NRTidalv2. These results are expected from the tidal phase shifts shown in Fig. <ref>. The logarithmic Bayes factors of the TF2_PNTidal and the TF2_NRTidalv2 relative to the TF2_MultipoleTidal, logBF^X_TF2_MultipoleTidal, where X=TF2_PNTidal or TF2_NRTidalv2, indicate no preference among the three tidal phase models for the BNS coalescence events GW170817 and GW190425, as shown in Table <ref>. § CONCLUSION The PN GW phases of compact binary coalescences for the tidal multipole moments, which consist of the mass quadrupole Λ, the current quadrupole Σ, and the mass octupole Λ^(3) moments, have been completed up to 5+2.5PN order in Ref. <cit.>. We rewrite the original form into the component form as a function of the component multipole tidal deformabilities, which is convenient for data analysis.
To see the impact of the current quadrupole and mass octupole moments, we compare the MultipoleTidal with the PNTidal as well as the NRTidalv2 by computing the phase evolution, the phase difference, and the matches between waveform models, and by applying parameter estimation to the BNS coalescence events GW170817 and GW190425. First, comparing the phase evolution for different tidal waveform models shows that the MultipoleTidal gives a larger phase shift than the PNTidal, and is closer to the NRTidalv2. The phase difference between the MultipoleTidal and the PNTidal is about 0.1 rad at 1000 Hz. Second, comparing matches between different waveform models shows that the MultipoleTidal is closer to the NRTidalv2 than the PNTidal, in particular for high masses and large tidal deformabilities. Finally, we compare parameter estimation results from different waveform models for the BNS coalescence events GW170817 and GW190425. We find that the difference between the binary tidal deformability Λ̃ inferred with the TF2_MultipoleTidal and with the TF2_PNTidal is very small. This means that the additional current quadrupole and mass octupole moments give no significant systematic difference in the inferred Λ̃ compared to the mass quadrupole moment alone. These results are consistent with the phase evolution and the results of Ref. <cit.>. The estimated logarithmic Bayes factors between tidal waveform models show no preference among the three tidal phase models for GW170817 and GW190425. During the fourth observing run (O4), it is expected that tens of BNS coalescence signals will be detected, which will provide rich information on the sources <cit.>. As the number of detected BNS coalescence events increases, the systematic differences among different tidal waveform models will become noticeable and constraints on EOS models for NSs will be improved, as shown in Refs. <cit.>. Therefore, upcoming BNS coalescence data may reveal the need to go beyond the PN tidal models. The component form and the identical-NS form for the tidal multipole moments derived in this paper can be used to extend NR calibrated models to include multipole tidal interactions. Through the tidal multipole moments, additional information on the EOS for NSs will be obtained. § ACKNOWLEDGMENT We would like to thank Nami Uchikata, Kyohei Kawaguchi, and Hideyuki Tagoshi for fruitful discussions and useful comments on the study. This work is supported by JSPS KAKENHI Grant No. JP21K03548. The analyses in this paper were run on the VELA cluster in the ICRR. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO 600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain.
The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan. § EOS-INSENSITIVE RELATIONS In this appendix, we summarize the EOS-insensitive relations for the tidal multipole moments, which are used to obtain Σ, and Λ^(3) from Λ. A fitting function (including only the realistic EoSs) of the form is constructed as Eq. (60) of Ref. <cit.> ln y_i = a_i + b_i ln x_i + c_i (ln x_i)^2 +d_i (ln x_i)^3 + e_i (ln x_i)^4, where the fitted coefficients are in Table I of Ref. <cit.> as shown in Table <ref>. By using the EOS-insensitive relations of the tidal multipole moments, Σ=3.11 and Λ^(3)=483.3 are obtained for Λ=300. § WAVEFORM SYSTEMATICS FOR THE DIFFERENT TIDAL EFFECTS IN THE SOURCE PROPERTIES In this appendix, we present estimates of source parameters for completeness obtained by using three different tidal models: the MultipoleTidal, PNTidal, and NRTidalv2 models, employing the same point-particle baseline model TF2 for |χ_1z,2z| ≤ 0.05 and the upper frequency cutoff f_high=1000 Hz. We demonstrate that the inferred marginalized masses and spins are not sensitive to the different tidal models, unlike Λ̃. Figures <ref> and <ref> show two-dimensional posterior PDFs of (ℳ, q, χ_eff, Λ̃) for GW170817 and GW190425, respectively. The estimates of source parameters presented show the absence of significant systematic difference associated with a difference among tidal part models: the MultipoleTidal, PNTidal, and NRTidalv2 models. 99 Damour:2012yf T. Damour, A. Nagar and L. Villain, “Measurability of the tidal polarizability of neutron stars in late-inspiral gravitational-wave signals”, Phys. Rev. D 85, 123007 (2012) [arXiv:1203.4352 [gr-qc]]. Bini:2012gu D. Bini, T. Damour and G. Faye, “Effective action approach to higher-order relativistic tidal interactions in binary systems and their effective one body description,” Phys. Rev. D 85, 124034 (2012) [arXiv:1202.3565 [gr-qc]]. Agathos:2015uaa M. Agathos, J. Meidam, W. Del Pozzo, T. G. F. Li, M. Tompitak, J. Veitch, S. Vitale and C. Van Den Broeck, “Constraining the neutron star equation of state with gravitational wave signals from coalescing binary neutron stars,” Phys. Rev. D 92, 023012 (2015) [arXiv:1503.05405 [gr-qc]]. Henry:2020ski Q. Henry, G. Faye and L. Blanchet, “Tidal effects in the gravitational-wave phase evolution of compact binary systems to next-to-next-to-leading post-Newtonian order,” Phys. Rev. D 102, 044033 (2020) [arXiv:2005.13367 [gr-qc]]. TheLIGOScientific:2017qsa B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], “GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral”, Phys. Rev. Lett. 119, 161101 (2017) [arXiv:1710.05832 [gr-qc]]. Abbott:2018wiz B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], “Properties of the binary neutron star merger GW170817,” Phys. Rev. X 9, 011001 (2019) [arXiv:1805.11579 [gr-qc]]. De:2018uhw S. De, D. Finstad, J. M. Lattimer, D. A. Brown, E. Berger and C. M. Biwer, “Tidal Deformabilities and Radii of Neutron Stars from the Observation of GW170817,” Phys. Rev. Lett. 121, 091102 (2018) Erratum: [Phys. Rev. Lett. 121, 259902 (2018)] [arXiv:1804.08583 [astro-ph.HE]]. LIGOScientific:2018mvr B. P. Abbott et al. 
[LIGO Scientific and Virgo], “GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs,” Phys. Rev. X 9, 031040 (2019) [arXiv:1811.12907 [astro-ph.HE]]. Abbott:2020uma B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], “GW190425: Observation of a Compact Binary Coalescence with Total Mass ∼ 3.4 M_⊙,” Astrophys. J. Lett. 892, L3 (2020) [arXiv:2001.01761 [astro-ph.HE]]. LIGOScientific:2020ibl R. Abbott et al. [LIGO Scientific and Virgo], “GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run,” Phys. Rev. X 11, 021053 (2021) [arXiv:2010.14527 [gr-qc]]. Dai:2018dca L. Dai, T. Venumadhav and B. Zackay, “Parameter Estimation for GW170817 using Relative Binning,” [arXiv:1806.08793 [gr-qc]]. Narikawa:2018yzt T. Narikawa, N. Uchikata, K. Kawaguchi, K. Kiuchi, K. Kyutoku, M. Shibata and H. Tagoshi, “Discrepancy in tidal deformability of GW170817 between the Advanced LIGO twin detectors,” Phys. Rev. Research. 1, 033055 (2019) [arXiv:1812.06100 [astro-ph.HE]]. Narikawa:2019xng T. Narikawa, N. Uchikata, K. Kawaguchi, K. Kiuchi, K. Kyutoku, M. Shibata and H. Tagoshi, “Reanalysis of the binary neutron star mergers GW170817 and GW190425 using numerical-relativity calibrated waveform models,” Phys. Rev. Res. 2, 043039 (2020) [arXiv:1910.08971 [gr-qc]]. Gamba:2020wgg R. Gamba, M. Breschi, S. Bernuzzi, M. Agathos and A. Nagar, “Waveform systematics in the gravitational-wave inference of tidal parameters and equation of state from binary neutron star signals,” Phys. Rev. D 103, 124015 (2021) [arXiv:2009.08467 [gr-qc]]. Breschi:2021wzr M. Breschi, R. Gamba and S. Bernuzzi, “Bayesian inference of multimessenger astrophysical data: Methods and applications to gravitational waves,” Phys. Rev. D 104, 042001 (2021) [arXiv:2102.00017 [gr-qc]]. Narikawa:2022saj T. Narikawa and N. Uchikata, “Follow-up analyses of the binary-neutron-star signals GW170817 and GW190425 by using post-Newtonian waveform models,” Phys. Rev. D 106, 103006 (2022) [arXiv:2205.06023 [gr-qc]]. LIGOScientific:2018ehx B. P. Abbott et al. [LIGO Scientific and Virgo], “Constraining the p-Mode–g-Mode Tidal Instability with GW170817,” Phys. Rev. Lett. 122, 061104 (2019) [arXiv:1808.08676 [astro-ph.HE]]. Pratten:2019sed G. Pratten, P. Schmidt and T. Hinderer, “Gravitational-Wave Asteroseismology with Fundamental Modes from Compact Binary Inspirals,” Nature Commun. 11, 2553 (2020) [arXiv:1905.00817 [gr-qc]]. Pan:2020tht Z. Pan, Z. Lyu, B. Bonga, N. Ortiz and H. Yang, “Probing Crust Meltdown in Inspiraling Binary Neutron Stars,” Phys. Rev. Lett. 125, no.20, 201102 (2020) [arXiv:2003.03330 [astro-ph.HE]]. Pradhan:2022rxs B. K. Pradhan, A. Vijaykumar and D. Chatterjee, “Impact of updated multipole Love numbers and f-Love universal relations in the context of binary neutron stars,” Phys. Rev. D 107, 023010 (2023) [arXiv:2210.09425 [astro-ph.HE]]. Narikawa:2021pak T. Narikawa, N. Uchikata and T. Tanaka, “Gravitational-wave constraints on the GWTC-2 events by measuring the tidal deformability and the spin-induced quadrupole moment,” Phys. Rev. D 104, 084056 (2021) [arXiv:2106.09193 [gr-qc]]. Hinderer:2007mb T. Hinderer, “Tidal Love numbers of neutron stars,” Astrophys. J. 677, 1216 (2008) [arXiv:0711.2420 [astro-ph]]. Flanagan:2007ix E. E. Flanagan and T. Hinderer, “Constraining neutron star tidal Love numbers with gravitational wave detectors,” Phys. Rev. D 77, 021502 (2008) [arXiv:0709.1915 [astro-ph]]. 
Hinderer:2009ca T. Hinderer, B. D. Lackey, R. N. Lang and J. S. Read, “Tidal deformability of neutron stars with realistic equations of state and their gravitational wave signatures in binary inspiral,” Phys. Rev. D 81, 123016 (2010) [arXiv:0911.3535 [astro-ph.HE]]. Vines:2011ud J. Vines, E. E. Flanagan and T. Hinderer, “Post-1-Newtonian tidal effects in the gravitational waveform from binary inspirals,” Phys. Rev. D 83, 084051 (2011) [arXiv:1101.1673 [gr-qc]]. Veitch:2014wba J. Veitch et al., “Parameter estimation for compact binaries with ground-based gravitational-wave observations using the LALInference software library”, Phys. Rev. D 91, 042003 (2015) [arXiv:1409.7215 [gr-qc]]. LAL LIGO Scientific Collaboration, LIGO Algorithm Library - LALSuite, Free Software (GPL), 2018; . Abdelsalhin:2018reg T. Abdelsalhin, L. Gualtieri and P. Pani, “Post-Newtonian spin-tidal couplings for compact binaries,” Phys. Rev. D 98, 104046 (2018) [arXiv:1805.01487 [gr-qc]]. Banihashemi:2018xfb B. Banihashemi and J. Vines, “Gravitomagnetic tidal effects in gravitational waves from neutron star binaries,” Phys. Rev. D 101, 064003 (2020) [arXiv:1805.07266 [gr-qc]]. JimenezForteza:2018rwr X. Jiménez Forteza, T. Abdelsalhin, P. Pani and L. Gualtieri, “Impact of high-order tidal terms on binary neutron-star waveforms,” Phys. Rev. D 98, 124014 (2018) [arXiv:1807.08016 [gr-qc]]. Castro:2022mpw G. Castro, L. Gualtieri, A. Maselli and P. Pani, “Impact and detectability of spin-tidal couplings in neutron star inspirals,” Phys. Rev. D 106, 024011 (2022) [arXiv:2204.12510 [gr-qc]]. Landry:2018bil P. Landry, “Rotational-tidal phasing of the binary neutron star waveform,” [arXiv:1805.01882 [gr-qc]]. Favata:2013rwa M. Favata, “Systematic parameter errors in inspiraling neutron star binaries,” Phys. Rev. Lett. 112, 101101 (2014) [arXiv:1310.8288 [gr-qc]]. Wade:2014vqa L. Wade, J. D. E. Creighton, E. Ochsner, B. D. Lackey, B. F. Farr, T. B. Littenberg and V. Raymond, “Systematic and statistical errors in a bayesian approach to the estimation of the neutron-star equation of state using advanced gravitational wave detectors”, Phys. Rev. D 89, 103012 (2014) [arXiv:1402.5156 [gr-qc]]. Kawaguchi:2018gvj K. Kawaguchi, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, M. Shibata and K. Taniguchi, “Frequency-domain gravitational waveform models for inspiraling binary neutron stars,” Phys. Rev. D 97, 044044 (2018) [arXiv:1802.06518 [gr-qc]]. Dietrich:2017aum T. Dietrich, S. Bernuzzi and W. Tichy, “Closed-form tidal approximants for binary neutron star gravitational waveforms constructed from high-resolution numerical relativity simulations”, Phys. Rev. D 96, 121501(R) (2017) [arXiv:1706.02969 [gr-qc]]. Dietrich:2018uni T. Dietrich, S. Khan, R. Dudi, S. J. Kapadia, P. Kumar, A. Nagar, F. Ohme, F. Pannarale, A. Samajdar and S. Bernuzzi, et al. “Matter imprints in waveform models for neutron star binaries: Tidal and self-spin effects,” Phys. Rev. D 99, 024029 (2019) [arXiv:1804.02235 [gr-qc]]. Dietrich:2019kaq T. Dietrich, A. Samajdar, S. Khan, N. K. Johnson-McDaniel, R. Dudi and W. Tichy, “Improving the NRTidal model for binary neutron star systems,” Phys. Rev. D 100, 044003 (2019) [arXiv:1905.06011 [gr-qc]]. Yagi:2013sva K. Yagi, “Multipole Love Relations,” Phys. Rev. D 89, 043011 (2014) [erratum: Phys. Rev. D 96, 129904 (2017); erratum: Phys. Rev. D 97, 129901 (2018)] [arXiv:1311.0872 [gr-qc]]. Dietrich:2020eud T. Dietrich, T. Hinderer and A. 
Samajdar, “Interpreting Binary Neutron Star Mergers: Describing the Binary Neutron Star Dynamics, Modelling Gravitational Waveforms, and Analyzing Detections,” Gen. Rel. Grav. 53, 27 (2021) [arXiv:2004.02527 [gr-qc]]. Isoyama:2020lls S. Isoyama, R. Sturani and H. Nakano, “Post-Newtonian templates for gravitational waves from compact binary inspirals,” in Handbook of Gravitational Wave Astronomy, edited by C. Bambi, S. Katsanevas, and K. D. Kokkotas (Springer, Singapore, 2021), pp. 1-49 [arXiv:2012.01350 [gr-qc]]. Punturo:2010zz M. Punturo, M. Abernathy, F. Acernese, B. Allen, N. Andersson, K. Arun, F. Barone, B. Barr, M. Barsuglia and M. Beker, et al. “The Einstein Telescope: A third-generation gravitational wave observatory,” Class. Quant. Grav. 27, 194002 (2010). P1600143 Einstein Telecope anticipated sensitivity curve, LIGO Document P1600143-v18, . Branchesi:2023mws M. Branchesi, M. Maggiore, D. Alonso, C. Badger, B. Banerjee, F. Beirnaert, E. Belgacem, S. Bhagwat, G. Boileau and S. Borhanian, et al. “Science with the Einstein Telescope: a comparison of different designs,” [arXiv:2303.15923 [gr-qc]]. Puecher:2023twf A. Puecher, A. Samajdar and T. Dietrich, “Measuring tidal effects with the Einstein Telescope: A design study,” [arXiv:2304.05349 [astro-ph.IM]]. PyCBC Nitz, A. H., Harry, I. W., Willis, J. L., et al. “PyCBC” Software, , GitHub. Usman:2015kfa S. A. Usman, A. H. Nitz, I. W. Harry, C. M. Biwer, D. A. Brown, M. Cabero, C. D. Capano, T. Dal Canton, T. Dent and S. Fairhurst, et al. “The PyCBC search for gravitational waves from compact binary coalescence,” Class. Quant. Grav. 33, 215004 (2016) [arXiv:1508.02357 [gr-qc]]. Dhurandhar:1992mw S. V. Dhurandhar and B. S. Sathyaprakash, “Choice of filters for the detection of gravitational waves from coalescing binaries. 2. Detection in colored noise,” Phys. Rev. D 49, 1707-1722 (1994) Buonanno:2009zt A. Buonanno, B. Iyer, E. Ochsner, Y. Pan and B. S. Sathyaprakash, “Comparison of post-Newtonian templates for compact binary inspiral signals in gravitational-wave detectors”, Phys. Rev. D 80, 084043 (2009) [arXiv:0907.0700 [gr-qc]]. Blanchet:2013haa L. Blanchet, Gravitational Radiation from Post-Newtonian Sources and Inspiralling Compact Binaries, Living Rev. Rel. 17, 2 (2014) [arXiv:1310.1528 [gr-qc]]. Blanchet:2023bwj L. Blanchet, G. Faye, Q. Henry, F. Larrouturou and D. Trestini, “Gravitational-Wave Phasing of Compact Binary Systems to the Fourth-and-a-Half post-Newtonian Order,” [arXiv:2304.11185 [gr-qc]]. Bohe:2013cla A. Bohe, S. Marsat and L. Blanchet, “Next-to-next-to-leading order spin-orbit effects in the gravitational wave flux and orbital phasing of compact binaries”, Class. Quant. Grav. 30, 135009 (2013) [arXiv:1303.7412 [gr-qc]]. Arun:2008kb K. G. Arun, A. Buonanno, G. Faye and E. Ochsner, “Higher-order spin effects in the amplitude and phase of gravitational waveforms emitted by inspiraling compact binaries: Ready-to-use gravitational waveforms”, Phys. Rev. D 79, 104023 (2009) Erratum: [Phys. Rev. D 84, 049901 (2011)] [arXiv:0810.5336 [gr-qc]]. Mikoczi:2005dn B. Mikoczi, M. Vasuth and L. A. Gergely, “Self-interaction spin effects in inspiralling compact binaries”, Phys. Rev. D 71, 124043 (2005) [astro-ph/0504538]. Skilling:2004 J. Skilling, "Nested Sampling," Bayesian Inference and Maximum Entropy Methods in Science and Engineering MAXENT 2004 (eds Fischer, R., Dose, V., Preuss, R. & von Toussaint, U.) 395 (AIP, 2004). Skilling:2006 J. 
Skilling, "Nested sampling for general Bayesian computation," Bayesian Analysis 1, 833 (2006). Ashton:2022grj G. Ashton, N. Bernstein, J. Buchner, X. Chen, G. Csányi, A. Fowlie, F. Feroz, M. Griffiths, W. Handley and M. Habeck, et al. “Nested sampling for physical scientists,” Nature 2, 39 (2022) [arXiv:2205.15570 [stat.CO]]. LIGOScientific:2019lzm R. Abbott et al. [LIGO Scientific and Virgo], “Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo,” SoftwareX 13, 100658 (2021) [arXiv:1912.11716 [gr-qc]]. Cornish:2014kda N. J. Cornish and T. B. Littenberg, “BayesWave: Bayesian Inference for Gravitational Wave Bursts and Instrument Glitches,” Class. Quant. Grav. 32, no.13, 135012 (2015) [arXiv:1410.3835 [gr-qc]]. Littenberg:2015kpb T. B. Littenberg, J. B. Kanner, N. J. Cornish and M. Millhouse, “Enabling high confidence detections of gravitational-wave bursts,” Phys. Rev. D 94, no.4, 044050 (2016) [arXiv:1511.08752 [gr-qc]]. Chatziioannou:2019zvs K. Chatziioannou, C. J. Haster, T. B. Littenberg, W. M. Farr, S. Ghonge, M. Millhouse, J. A. Clark and N. Cornish, “Noise spectral estimation methods and their impact on gravitational wave measurement of compact binary mergers,” Phys. Rev. D 100, no.10, 104004 (2019) [arXiv:1907.06540 [gr-qc]]. Soares-Santos:2017lru DES and Dark Energy Camera GW-EM Collaborations: M. Soares-Santos, D. E. Holz, J. Annis, R. Chornock, K. Herner, E. Berger, D. Brout, H. Chen, R. Kessler, M. Sako et al., “The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/Virgo GW170817. I. Discovery of the Optical Counterpart Using the Dark Energy Camera,” Astrophys. J. Lett. 848, L16 (2017) [arXiv:1710.05459 [astro-ph.HE]]. LIGOScientific:2017ync B. P. Abbott et al. [LIGO Scientific, Virgo, Fermi GBM, INTEGRAL, IceCube, AstroSat Cadmium Zinc Telluride Imager Team, IPN, Insight-Hxmt, ANTARES, Swift, AGILE Team, 1M2H Team, Dark Energy Camera GW-EM, DES, DLT40, GRAWITA, Fermi-LAT, ATCA, ASKAP, Las Cumbres Observatory Group, OzGrav, DWF (Deeper Wider Faster Program), AST3, CAASTRO, VINROUGE, MASTER, J-GEM, GROWTH, JAGWAR, CaltechNRAO, TTU-NRAO, NuSTAR, Pan-STARRS, MAXI Team, TZAC Consortium, KU, Nordic Optical Telescope, ePESSTO, GROND, Texas Tech University, SALT Group, TOROS, BOOTES, MWA, CALET, IKI-GW Follow-up, H.E.S.S., LOFAR, LWA, HAWC, Pierre Auger, ALMA, Euro VLBI Team, Pi of Sky, Chandra Team at McGill University, DFN, ATLAS Telescopes, High Time Resolution Universe Survey, RIMAS, RATIR and SKA South Africa/MeerKAT], “Multi-messenger Observations of a Binary Neutron Star Merger,” Astrophys. J. Lett. 848, L12 (2017) [arXiv:1710.05833 [astro-ph.HE]]. J-GEM:2017tyx Y. Utsumi et al. [J-GEM], “J-GEM observations of an electromagnetic counterpart to the neutron star merger GW170817,” Publ. Astron. Soc. Jap. 69, 101 (2017) [arXiv:1710.05848 [astro-ph.HE]]. KAGRA:2013rdx B. P. Abbott et al. [LIGO Scientific, Virgo and KAGRA], “Prospects for observing and localizing gravitational-wave transients with Advanced LIGO, Advanced Virgo and KAGRA,” Living Rev.Rel. 23, 3 (2020), Living Rev. Rel. 21, 3 (2018) [arXiv:1304.0670 [gr-qc]]. DelPozzo:2013ala W. Del Pozzo, T. G. F. Li, M. Agathos, C. Van Den Broeck and S. Vitale, “Demonstrating the feasibility of probing the neutron star equation of state with second-generation gravitational wave detectors,” Phys. Rev. Lett. 111, 071101 (2013) [arXiv:1307.8338 [gr-qc]]. Lackey:2014fwa B. D. Lackey and L. 
Wade, “Reconstructing the neutron-star equation of state with gravitational-wave detectors from a realistic population of inspiralling binary neutron stars,” Phys. Rev. D 91, 043002 (2015) [arXiv:1410.8866 [gr-qc]]. Wysocki:2020myz D. Wysocki, R. O'Shaughnessy, L. Wade and J. Lange, “Inferring the neutron star equation of state simultaneously with the population of merging neutron stars,” [arXiv:2001.01747 [gr-qc]]. Dudi:2018jzn R. Dudi, F. Pannarale, T. Dietrich, M. Hannam, S. Bernuzzi, F. Ohme and B. Brügmann, “Relevance of tidal effects and post-merger dynamics for binary neutron star parameter estimation,” Phys. Rev. D 98, 084061 (2018) [arXiv:1808.09749 [gr-qc]]. Samajdar:2018dcx A. Samajdar and T. Dietrich, “Waveform systematics for binary neutron star gravitational wave signals: effects of the point-particle baseline and tidal descriptions,” Phys. Rev. D 98, 124030 (2018) [arXiv:1810.03936 [gr-qc]]. Messina:2019uby F. Messina, R. Dudi, A. Nagar and S. Bernuzzi, “Quasi-5.5PN TaylorF2 approximant for compact binaries: point-mass phasing and impact on the tidal polarizability inference,” Phys. Rev. D 99, 124051 (2019) [arXiv:1904.09558 [gr-qc]]. Samajdar:2019ulq A. Samajdar and T. Dietrich, “Waveform systematics for binary neutron star gravitational wave signals: Effects of spin, precession, and the observation of electromagnetic counterparts,” Phys. Rev. D 100, 024046 (2019) [arXiv:1905.03118 [gr-qc]]. Agathos:2019sah M. Agathos, F. Zappa, S. Bernuzzi, A. Perego, M. Breschi and D. Radice, “Inferring Prompt Black-Hole Formation in Neutron Star Mergers from Gravitational-Wave Data,” Phys. Rev. D 101, 044006 (2020) [arXiv:1908.05442 [gr-qc]]. Landry:2020vaw P. Landry, R. Essick and K. Chatziioannou, “Nonparametric constraints on neutron star matter with existing and upcoming gravitational wave and pulsar observations,” Phys. Rev. D 101, 123007 (2020) [arXiv:2003.04880 [astro-ph.HE]]. Chen:2020fzm A. Chen, N. K. Johnson-McDaniel, T. Dietrich and R. Dudi, “Distinguishing high-mass binary neutron stars from binary black holes with second- and third-generation gravitational wave observatories,” Phys. Rev. D 101, 103008 (2020) [arXiv:2001.11470 [astro-ph.HE]]. Chatziioannou:2021tdi K. Chatziioannou, “Uncertainty limits on neutron star radius measurements with gravitational waves,” Phys. Rev. D 105, 084021 (2022) [arXiv:2108.12368 [gr-qc]]. Kunert:2021hgm N. Kunert, P. T. H. Pang, I. Tews, M. W. Coughlin and T. Dietrich, “Quantifying modelling uncertainties when combining multiple gravitational-wave detections from binary neutron star sources,” Phys. Rev. D , L061301 (2022) [arXiv:2110.11835 [astro-ph.HE]].
http://arxiv.org/abs/2307.01906v1
20230704202947
Complex Graph Laplacian Regularizer for Inferencing Grid States
[ "Chinthaka Dinesh", "Junfei Wang", "Gene Cheung", "Pirathayini Srikantha" ]
eess.SP
[ "eess.SP" ]
An 𝔰𝔩_2 action on link homology of T(2,k) torus links Felix Roz August 1, 2023 ===================================================== In order to maintain stable grid operations, system monitoring and control processes require the computation of grid states (e.g. voltage magnitude and angles) at high granularity. It is necessary to infer these grid states from measurements generated by a limited number of sensors like phasor measurement units (PMUs) that can be subjected to delays and losses due to channel artefacts, and/or adversarial attacks (e.g. denial of service, jamming, etc.). We propose a novel graph signal processing (GSP) based algorithm to interpolate states of the entire grid from observations of a small number of grid measurements. It is a two-stage process, where first an underlying Hermitian graph is learnt empirically from existing grid datasets. Then, the graph is used to interpolate missing grid signal samples in linear time. With our proposal, we can effectively reconstruct grid signals with significantly smaller number of observations when compared to existing traditional approaches (e.g. state estimation). In contrast to existing GSP approaches, we do not require knowledge of the underlying grid structure and parameters and are able to guarantee fast spectral optimization. We demonstrate the computational efficacy and accuracy of our proposal via practical studies conducted on the IEEE 118 bus system. § INTRODUCTION The modern electric grid is composed of intermittent generation sources and variable consumer demands that push the system to operate close to its limits. Elevated situational awareness where operators are able to rapidly infer the states of every electrical node (, bus) at highly granular intervals is necessary to ensure reliable grid operations. A costly approach would be to deploy sensors such as phasor measurement units on every bus <cit.>. Instead, using a limited number of sensors from which the electrical states of all nodes in the grid are inferred is a more efficient solution <cit.>. Other factors that can reduce the availability of grid measurements include adversarial attacks (, denial of service) and/or issues in the communication layer (, delays, packet losses). Traditional state estimation processes require knowledge of the underlying grid topology and parameters. Also, these must be supplied with at least 2N sensor measurements (where N is the total number of buses in the grid) that satisfy the observability criteria to allow for accurate state estimation <cit.>. Moreover, due to the non-linearity of the mapping between grid measurements and states, linear state estimation (, DC state estimation) is widely utilized to support practical system operations. However, as numerous assumptions that include voltage magnitude of all buses being 1 p.u. are made which are not valid in practice, the estimated states will not accurately reflect the actual operation conditions of the grid <cit.>. In this paper, we leverage on the historical grid data available to infer grid states from a small subset of grid measurements using a graph signal processing (GSP) approach <cit.>. Our proposal is composed of two stages. In the first stage, a sparse complex-valued inverse covariance matrix ∈ℂ^N × N (interpreted as a graph Laplacian) is learnt from the grid dataset. encodes the underlying state inter-dependencies of the grid, without requiring knowledge of the grid structure and its parameters (, bus admittance matrix). 
To learn , we extend a previous CLIME formulation <cit.> to the complex-valued case, which we transform into a linear program (LP) that is solvable offline via a state-of-the-art LP solver[A representative state-of-the-art general LP solver is <cit.>, which has complexity 𝒪(N^2.055).]. Importantly, we construct complex-valued matrix to be Hermitian, which by the Spectral Theorem <cit.> means that it has real eigenvalues. This enables us to define a complex graph Laplacian regularizer (GLR) <cit.> that computes a real value ^H∈ℝ from a complex-valued signal ∈ℂ^N. In the second part, we combine GLR together with an ℓ_2-norm fidelity term, forming a quadratic objective to estimate the missing grid states. The solution is a linear system that is efficiently computed in linear time via conjugate gradient (CG) <cit.>. Existing works that pursue a GSP approach to interpolate missing grid states mainly differ from our proposal in how a graph variation opperator is chosen to represent the grid (, matrix). Specifically, reference <cit.> utilizes electrical distances to construct this matrix, reference <cit.> leverages on the real components of the nodal admittance matrix, and references <cit.> utilize the entire complex-valued nodal admittance matrix to define the graph Laplacian matrix. The main issues with these techniques are that all of these require the underlying grid topology and/or parameters. In approaches that construct a real-valued Laplacian matrix, these either utilize only part of the admittance values or partial information about the grid (, electrical distances). This limits the information embedded in the graph Laplacian about practical electrical inter-dependencies amongst the grid states. In other cases such as <cit.>, the entire nodal admittance matrix is utilized, and although this is a complex symmetric matrix, it is not Hermitian. This means that the Spectral Theorem <cit.> does not apply, and the matrix is not guaranteed to be eigen-decomposible[Consider an example of a complex symmetric matrix = [1  i;  i  0], which is not eigen-decomposible. See the last paragraph in Section <ref> for a detailed comparison of our work with <cit.>.]. To summarize, in this paper, there are three main contributions made: 1) A LP is formulated to use existing grid measurement datasets to learn a sparse, complex-valued but Hermitian graph Laplacian matrix that is representative of the actual electrical inter-dependencies in the grid, ensuring that its eigenvalues are real; 2) A quadratic objective is formulated using a complex GLR to infer the missing grid states, which can be solved in linear time; and 3) Experimental studies along with comparative studies are conducted on practical 118-bus system to demonstrate the efficacy of the proposed graph learning and grid signal interpolation method. To the best of our knowledge, we are the first to propose the learning of a sparse, complex-valued and Hermitian graph Laplacian matrix with real eigenvalues for the grid state interpolation problem. The paper is organized as follows. Sec. <ref> consists of fundamental definitions and notations associated with the graph signal processing literature. Sec. <ref> details the proposed graph learning problem formulation and transformation into a linear program. Sec. <ref> presents the grid signal interpolation problem formulation and iterative solution computation based on CG. Sec. <ref> presents the experimental and comparative studies evaluating the proposed algorithm. Finally, the paper is concluded in Sec. 
<ref>. § GRAPH SPECTRUM §.§ GSP Definitions First, we present the notations used in this paper. Vectors and matrices are denoted by lower-case (, ) and upper-case (, ) boldface letters. _N denotes an identity matrix of dimension N × N, _N (_N) denotes an all-one (all-zero) vector of length N, and _N,N denotes an all-zero matrix of dimension N × N. Next, we review basic definitions in GSP <cit.>. Conventionally, an undirected graph (,,) has a set = {1, …, N} of N nodes connected by edges in set , where edge (i,j) ∈ has edge weight w_i,j = W_i,j specified in symmetric adjacency matrix ∈ℝ^N × N. Diagonal degree matrix ∈ℝ^N × N has diagonal entries D_i,i = ∑_j W_i,j. Symmetric combinatorial graph Laplacian matrix (graph Laplacian for short) is Ł≜ -. If self-loops (i,i) ∈ exist, then the symmetric generalized graph Laplacian matrix ≜ - + diag() is typically used. §.§ Hermitian Graph and its Spectrum To process complex-valued signal ∈ℂ^N (, voltage phasors for N buses), we extend conventional undirected graphs to a new directed graph notion called Hermitian graphs. First, each edge [i,j] ∈ is now directed and endowed with a complex-valued edge weight w_i,j∈ℂ; the associated edge [j,i] ∈ in the opposite direction connecting the same two nodes i and j has weight w_j,i that is the complex conjugate of w_i,j, , w_j,i = w_i,j^H. See Fig. <ref> for an example of a Hermitian graph with three nodes and the corresponding graph Laplacian . Note that the diagonal terms of are real, while the off-diagonal terms are in complex conjugate pairs, , _i,j = _j,i^H. A Hermitian graph means that the corresponding adjacency matrix ∈ℂ^N × N and subsequently graph Laplacian Ł∈ℂ^N × N (or generalized Laplacian ∈ℂ^N × N) are Hermitian; , Ł = Ł^H (or = ^H). By the Spectral Theorem <cit.>, eigen-pairs {λ_k, _̌k} of Hermitian Ł (or ) have real eigenvalues λ_k's and mutually orthogonal eigenvectors _̌k's, , _̌k^H_̌l = δ_k-l, where δ_k is the discrete impulse. Thus, we can define a Hilbert space for complex length-N vectors {∈ℂ^N} with an inner product ⟨,̆⟩̌≜^̌H$̆ that defines orthogonality—{_̌k}is a set of orthogonal basis vectors spanning<cit.>. A signal∈ℂ^Ncan be spectrally decomposed to= ^H , where thek-th graph frequency coefficient isα_k = ⟨, _̌k ⟩= _̌k^H . Further, because{_̌k}are eigenvectors of HermitianŁ, they are successive orthogonal arguments that minimize the Rayleigh quotient^̌H Ł$̌. Thus, interpreting Ł as the precision matrix of a Gaussian Markov Random Field (GMRF) model <cit.>, , the probability[We show in Section <ref> that given Hermitian and positive semi-definite (PSD) matrix Ł, GLR ^HŁ∈ℝ_+ for any ∈ℂ^N, and thus exp(-^HŁ) is within real interval [0,1], and hence can be interpreted as a probability.] Pr() of is Pr() ∝exp( - ^HŁ). This means that _̌1 is the most probable signal, _̌2 is the next most probable signal orthogonal to _̌1, etc. Hence, using GLR ^HŁ as a signal prior means a bias towards low-frequency signals that are also the most probable. We contrast our definition of graph frequencies using Hermitian Laplacian Ł with one in <cit.>, which claimed that a complex symmetric graph shift operator (GSO) matrix can be eigen-decomposed as = ^⊤ by Theorem 4.4.13 in <cit.>. First, the cited theorem states only that symmetric matrix is diagonalizable, , = ^-1 and is a diagonal matrix, iff it is complex orthogonally diagonalizable, , = ^⊤. 
Instead, we conjecture that the intended theorem is 4.4.4 (Takagi factorization), which states that a complex symmetric matrix can be decomposed to = ^⊤, where are the set of orthonormal eigenvectors for ^H. While the vectors {_̆k} span the space of complex vectors {∈ℂ^N}, they are not eigenvectors of , and hence are not successive arguments that minimize the Rayleigh quotient ^̆H$̆. Thus, prior^H would not promote low-frequency signal construction (first eigenvectors of). Further, sinceis not Hermitian, prior^H is not real in general and thus not amenable to fast optimization. § HERMITIAN GRAPH LEARNING §.§ Computing Sparse Complex Precision Matrix as LP Our first goal is to estimate a sparse complex-valued precision matrix∈ℂ^N ×N—interpreted as a graph Laplacian matrixto a Hermitian graph—from observation matrix= [_1, …, _K] ∈ℂ^N ×K, where_k ∈ℂ^Nis thek-th zero-mean signal observation (, there is a training dataset composed ofKdatapoints). Our formulation generalizes the original CLIME formulation <cit.> that seeks a real matrix∈ℝ^N ×N, given real empirical covariance matrix∈ℝ^N ×N. Specifically, the CLIME formulation is min_  _1,     - _N _∞≤ρ whereρ∈ℝ_+is a parameter. In a nutshell, (<ref>) seeks a sparse(promoted by theℓ_1-norm in the objective) that approximates the right inverse[Note that in general there exist many right inverses = given positive definite matrix , while a matrix inverse ^-1 is unique.] of(enforced by the constraint). Eachi-th column_iofin (<ref>) can be solved independently as min__i_i_1,    _i - _i_∞≤ρ where_iis thei-th column of_N—one-hot (canonical) vector with1at thei-th entry and0elsewhere. (<ref>) can be solved as a linear program (LP) using an off-the-shelf LP solver such as <cit.> with complexity𝒪(N^2.055). The obtained solutionfrom (<ref>) is not symmetric in general, and <cit.> post-computes a symmetric approximation^o ←(+ ^⊤)/2. We generalize CLIME to compute a complex-valued∈ℂ^N ×Nfrom observation∈ℂ^N ×K. We first compute empirical covariance matrix= 1/N ^H ∈ℂ^N ×N. Optimization variablehas real and imaginary parts,= ^R + j ^I, where^R, ^I ∈ℝ^N ×N. Our final solutionmust be Hermitian, which implies that its eigenvalues are real by the Spectral Theorem <cit.>. This also means that^Ris symmetric while^Iis anti-symmetric: ^R = (^R)^⊤,    ^I = - (^I)^⊤ . Note that anti-symmetry in^Imeans thatP^I_m,m = 0, ∀m. Like CLIME, we do not impose (anti-)symmetry on^Rand^Iduring optimization; instead, given optimized^Rand^I, we post-compute final solution as^R* ←(^R + (^R)^⊤)/2and^I* ←(^I - (^I)^⊤)/2to ensure matrix (anti-)symmetry. To handle complex_i, we generalize the objective in (<ref>)_i_1 = ∑_n p_i,n_1, wherep_i,n_1is now the Manhattan distance forp_i,n ∈ℂ, ,p_i,n_1 = |p_i,n^R| + |p_i,n^I|. To retain a linear objective, we define upper-bound auxiliary variables^Rand^Iwith linear constraints: ^R ≥± ^R_i,      ^I ≥± ^I_i where^R ≥± ^R_imeans^R ≥^R_iand^R ≥- ^R_i. Vector inequality here means entry-by-entry inequality, ,^R ≥^R_imeansp̅^R_n ≥p^R_i,n, ∀n. Thus, the objective becomes ∑_np̅^R_n + p̅^I_n = _N^⊤^R + _N^⊤^I . We next generalize the constraint term_i - _i_∞, where_∞= max_n v_n _1, andv_n_1is the aforementioned Manhattan distance for complexv_n. Expanding_i - _iinto its real and imaginary components: _i - _i = (^R + j ^I)(^R_i + j ^I_i) - _i = (^R ^R_i - ^I ^I_i - _i) + j (^R ^I_i + ^I ^R_i) . We now define upper-bound auxiliary variables^Rand^Iwith corresponding linear constraints: ^R ≥±( ^R ^R_i - ^I ^I_i - _i ),    ^I ≥±( ^R ^I_i + ^I ^R_i ) . 
Thus, the constraint of the optimization_i - _i_∞ ≤ρbecomes ^R + ^I ≤ρ_N . Summarizing, the LP formulation to compute complex_iis min__i, , _N^⊤^R + _N^⊤^I,  {[ ^R ≥±^R_i,   ^I ≥±^I_i; ^R ≥± (^R ^R_i - ^I ^I_i - _i); ^I ≥± (^R ^I_i + ^I ^R_i); ^R + ^I ≤ρ_N ]. . Note that ifis real-valued, then^I = _N,N, and^I = ^I = _Nto minimize objective while easing the last constraint to^R ≤ρ_N. Hence, (<ref>) defaults to an LP for original CLIME (<ref>) as expected. §.§ Complex GLR Having computed^Rand^Ivia (<ref>) (and post-computing^R*and^I*to ensure they are symmetric and anti-symmetric, respectively), we interpret Hermitian^* = ^R* + j^I*as the Laplacian matrix of a Hermitian graph, ,= ^*. We show next that the complex GLR^H , where∈ℂ^N, is a real-valued quantity, and thus amenable to fast optimization as a quadratic signal prior. We first note that the theoretical covariance matrix(for zero-mean random vector∈ℂ^N) is Hermitian: ^H = ( E[^H] )^H = E[^H] = . Thus, by the Spectral Theoremhas mutually orthogonal eigenvectors{_̌k}with real eigenvalues{γ_k}, and we can write positive definite (PD)and its inverse = ^-1as = ∑_k γ_k _̌k _̌k^H,       = ∑_k λ_k _̌k _̌k^H whereλ_k = γ_k^-1 ∈ℝ. Thus, letting= , ^H = ∑_k λ_k ^H_̌k _̌k^H = ∑_k λ_k _̌k^H^2_2 ∈ℝ . Given that our computed precision matrix^*is also Hermitian, the same argument in (<ref>) can show complex GLR^H is also real-valued in practice when= ^*. § GRAPH-BASED INTERPOLATION §.§ Problem Formulation Having established graph Laplacian, we formulate our complex-valued grid state interpolation problem as follows. First, define a sampling matrix∈{0,1}^M ×Nthat selectsMentries corresponding to observation∈ℂ^Mfrom sought state vector∈ℂ^N, whereM < N. For example,that picks out the second and fourth entries from four nodes is = [ [ 0 1 0 0; 0 0 0 1 ]] . We formulate a quadratic objective by combining anℓ_2-norm fidelity term with our proposed complex GLR as min_ - ^2_2 + μ ^H , whereμ∈ℝ_+is a real-valued parameter trading off the fidelity term with GLR. Note that because both- ^2_2=(- )^H (- )and complex GLR^H are real, minimization of a real-valued objective (<ref>) is well-defined. The solution^*to (<ref>) can be obtained by solving the following linear system: ( ^⊤ + μ) ^* = ^⊤ where coefficient matrix is= ^⊤+ μ. Givencomputed from (<ref>) is PD in general,is also PD by Weyl's inequality <cit.>. Becauseis sparse, Hermitian and PD,^*in (<ref>) can be obtained without matrix inversion using conjugate gradient (CG) <cit.> in roughly linear time. §.§ Summary of Proposal The proposed graph learning and interpolation is summarized in Alg. <ref>. First,Ksignal observations{_k}are used to compute empirical covariance matrix= 1/N ^H. Then, the LP in (<ref>) is solvedNtimes to obtainNcolumns{_i}of precision matrixusing an off-the-shelf solver such as <cit.>. We set graph Laplacianto be^* = ^R* + j ^I*, where^R*and^I*are post-processed to be symmetric and anti-symmetric, respectively. The sampling matrixis set based on the samples that are available to be used for interpolation. Finally, CG <cit.> is used to solve the linear system in (<ref>) to obtain all grid states^*. § EXPERIMENTS In this section, we conduct practical experiments to evaluate the performance of our proposed grid state interpolation method on the IEEE 118 bus system. 15,000 observations are generated using MATPOWER that is run in MATLAB. The power demands are sampled uniformly in the range of [80% to 120%] of the default demand values as is common practice in existing literature <cit.>. 
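Before turning to the numerical results, the following Python sketch illustrates the interpolation step summarized in the algorithm above: given a Hermitian graph Laplacian and a few observed entries, the linear system is solved with conjugate gradient. The small random Hermitian positive-definite matrix used here is only a stand-in for a Laplacian learnt from grid data via the LP of the previous section, and the variable names are ours for illustration.

import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
N, M, mu = 8, 3, 0.5

# Random Hermitian positive-definite stand-in for the learnt graph Laplacian.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
L = A @ A.conj().T + N * np.eye(N)

# Sampling matrix H picking M of the N grid states, and the observations y.
idx = np.array([1, 3, 6])
H = np.zeros((M, N))
H[np.arange(M), idx] = 1.0
x_true = rng.normal(size=N) + 1j * rng.normal(size=N)
y = H @ x_true

# Coefficient matrix H^T H + mu * L is Hermitian positive definite, so CG applies.
B = H.T @ H + mu * L
x_hat, info = cg(B, H.T @ y)
print(info)          # 0 indicates CG converged
print(x_hat[idx])    # interpolated states at the observed buses (regularized fit)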
The generation mix considered includes renewables and traditional bulk generation sources. Wind and solar power sources account for 8.4% and 2.3% of the overall generation in the United States <cit.>, and the same proportion is applied in our simulations. Each wind farm is assumed to be composed of turbines representing a total of 1800 m^2 of swept area. Each solar farm consists of 10,000 photovoltaic panels (1.67 m^2 per panel). Uncertainties in wind speed and solar generation are modeled using the Weibull probability distribution with the shape and location parameters taking values of (2, 5) and (2.5, 6), respectively, in a manner similar to reference <cit.>. Each observation is composed of the complex voltage phasors of the 118 buses. Alg. <ref> is implemented on a MacBook Pro Apple M1 Chip with 8-core CPU, 8-core GPU, 16-core Neural Engine, and 16GB unified memory. In order to compute the Hermitian matrix as outlined in Sec. <ref>, 5000 observations are randomly selected to compute the empirical covariance matrix. The remaining 10000 observations are utilized to test the interpolation of missing grid states. Then, in the interpolation of the missing grid states, we randomly select 30, 40, 50, 60, 70, 80, 90, or 100 samples out of the 118 samples to reconstruct the remaining missing grid states (e.g., if 30 samples are utilized, then 88 samples are inferred/interpolated by solving the unconstrained quadratic program listed in Sec. <ref>). The mean square error of the inferred grid states from the ground truth is computed for the 10,000 samples along with the 95% confidence interval and listed in Tables <ref> and <ref> for voltage magnitudes and phase angles, respectively. It is clear that our proposal has very low error margins (i.e., within 2.3%) even when only 25% of grid measurements are available for interpolating the remaining state variables. This is significantly more efficient than traditional state estimation processes that require as many samples as the number of states being inferred. We examine the effect of varying the number of observations used in constructing the matrix on the accuracy of the interpolation of missing grid states. Intuitively, one can expect that the more observations are included, the more reflective the learnt matrix will be of the electrical interdependencies in the system. The number of samples is fixed to be 100, and the MSE of the state interpolation is tested with different numbers of observations used in constructing the matrix. The result is shown in Fig. <ref>, in which the average MSE gets lower as the number of observations rises. As the matrix is computed once offline, the computational overhead incurred will not affect the interpolation processes. In Table <ref>, we list the average total execution time for the interpolation algorithm. We are able to compute the missing grid states within 1.5 seconds. This allows for near real-time inferences of grid states for a system as large as the IEEE 118 bus system. Our approach is conducive to near real-time monitoring of the entire grid from a small set of sample states. Finally, we conduct comparative studies with two GSP based schemes. The first scheme is an interpolation method regularized using graph total variation (GTV) <cit.>, resulting in an ℓ_2-ℓ_1-norm minimization objective. For the second scheme, instead of learning a complex-valued Laplacian, we learned two Laplacian matrices for the real and imaginary components of the signals separately, and then performed interpolation via (<ref>).
The proposed method reduced the MSE by 7.68% and 9.86% on average for voltage magnitude and phase angle, respectively, in comparison to Scheme 2, while it is comparable to Scheme 1. Average execution times are shown in Table <ref>. The proposed method is noticeably faster than both competing schemes: by 17% and 18%, respectively. Furthermore, as the learned matrix is Hermitian, we can ensure stable computation, while reference <cit.> utilizes the nodal admittance matrix, which may have complex eigenvalues and may not be eigen-decomposable. Our proposal also does not require knowledge of the underlying system topology and parameters. § CONCLUSION In this paper, we have presented a novel GSP-based approach that interpolates a small number of grid measurement samples to compute the voltage magnitudes and phase angles of all buses in the grid, from which other grid states such as real and reactive power flows can be computed. Our proposal does not require knowledge of the underlying grid structure and electrical parameters. Further, we have demonstrated in our experiments that a significantly smaller number of samples (e.g., only 25% of grid states can be utilized to reconstruct all grid states) is required to effectively interpolate grid states in comparison to traditional state estimation techniques. We demonstrated the performance of our proposal in terms of accuracy and computational time by comparing our scheme with another GSP approach proposed recently in the literature and with a primal-dual-algorithm-based method for solving the formulated optimization problems.
http://arxiv.org/abs/2307.05731v1
20230706132800
Social human collective decision-making and its applications with brain network models
[ "Thoa Thieu", "Roderick Melnik" ]
physics.soc-ph
[ "physics.soc-ph", "q-bio.NC" ]
Social human collective decision-making and its applications with brain network models Thoa Thieu MS2Discovery Interdisciplinary Research Institute, Wilfrid Laurier University, Waterloo, Ontario, Canada, [email protected] Roderick Melnik MS2Discovery Interdisciplinary Research Institute, Wilfrid Laurier University, Waterloo, Ontario, Canada & BCAM - Basque Center for Applied Mathematics, Bilbao, Spain [email protected] Thoa Thieu and Roderick Melnik A better understanding of social human dynamics would be a powerful tool to improve nearly any computational endeavour that involves human interactions. This includes intelligent environments featuring, for instance, efficient illumination systems, smart evacuation signalling systems, intelligent transportation systems, crowd control, or disaster response. Moreover, given that the human population has grown significantly in number and spread across the planet, the capacity to predict social human behaviours will help to explain the special behavioural forms observed when masses of people gather together and form crowds. Additionally, human crowd dynamics are characterized by complex psychological and sociobiological behaviour. The contributions of psychological factors need to be accounted for to obtain more reliable models. Many models have been proposed to describe social human group dynamics in different scenarios. However, due to the complexity of such systems, amplified by the above factors, social human decision-making with multiple choices has not been fully scrutinized. In this chapter, we consider probabilistic drift-diffusion models and Bayesian inference frameworks to address this issue, assisting better social human decision-making. We provide details of the models, as well as representative numerical examples, and discuss the decision-making process with a representative example of the escape-route decision-making phenomenon by further developing the drift-diffusion models and Bayesian inference frameworks. In the latter context, we also give a review of recent developments in human collective decision-making and its applications with brain network models. Furthermore, we provide illustrative numerical examples to discuss the role of neuromodulation and reinforcement learning in decision-making processes. Finally, we call attention to existing challenges, open problems, and promising approaches in studying social dynamics and collective human decision-making, including those arising from nonequilibrium considerations of the associated processes. § INTRODUCTION In recent years, many models have been proposed to describe the decision-making of human crowd dynamics in different scenarios. The field of human crowd dynamics consists largely of works not only in mathematics, scientific computing, engineering, and physics but also in social psychology. The detailed behavior of human crowds is complicated: many physiological and sociobiological processes interact with the physical feedback effects caused by the surrounding environment, and these interactions are still largely unknown. To develop reliable human crowd dynamics models, the contributions of psychological factors need to be accounted for. There is overwhelming evidence in support of this. For example, panic in crowd dynamics often arises when pedestrians attempt to escape from threats in situations of a perceived struggle for survival, which may end in trampling or crushing.
Hence, decision-making with many social learning processes, including opinion dynamics, is essential in studying human group movements. In general, models of human decision-making from the brain research point of view are important in studying cognitive psychology and have been successfully used to fit experimental data and relate them to neurophysiological mechanisms in the brain. This research direction raises a relevant question what is brain neuroscience revealing about social decision-making? Recall now that one of the most important models for binary decision-making is the drift-diffusion model (DDM) <cit.>. On the other hand, DDM includes a diffusion process that models the accumulation of perceived evidence and yields the decisions upon reaching specified thresholds <cit.>. On the other hand, DDMs may bridge the gap between experiments of decision-making and neurobiologically motivated models that describe how the decision-making process is implemented in the brain. In such models, evidence, in the form of sensory information, enters competing neural networks, mimicking the work of real brain networks. For instance, information encoded by (approximately Poissonian) spike trains of neurons are accumulated by a neural population. This accumulation can be approximated by an Ornstein–Uhlenbeck process <cit.>. However, the next level of complexity comes from the fact that the characteristics of human group dynamics are affected by the decisions of each individual in the group. In making such decisions, and in particular when the decision context involves certain degrees of uncertainty, humans tend to utilize all possible sources of information notably social information. Humans often resort to the decisions made by others as additional sources of information to improve their decision-making. This could potentially link to opinion and/or learning dynamics among human groups <cit.>. When this process is followed by a tendency to ‘imitate’ the majority's decision, it could lead to unplanned coordination of the actions, often referred to as ‘herd’ behaviour <cit.>. The Bayesian model provides an appropriate tool to explain how the brain extracts information from noisy input as typically presented in perceptual decision-making tasks. Additionally, it is now known that the DDM has a relationship with such functional Bayesian models <cit.>. We also know that the brain infers our spatial orientation and properties of objects in the world from ambiguous and noisy sensory cues. Moreover, the recognition of self-motion in the presence of stationary, as well as independently moving objects, offers many challenging inference problems. This is due to the fact that the image motion of an object could be attributed to the movement of the object, self-motion, or some combination of the two <cit.>. On the other hand, we note that human perception (the process whereby sensory stimulation is translated into organized experience, e.g., vestibular signals) play a critical role in dissociating self-motion from object motion <cit.>. Hence, motivated by <cit.>, one of our representative examples of the developed theory based on DDM and Bayesian approach, will be the decision-making process with application to risky escape route decision phenomena, along with the analysis of other important aspects of this process. The rest of this chapter is organized as follows. 
In Section <ref>, we consider probabilistic DDM and Bayesian inference frameworks for social human decision-making, where we provide details of the models, a brief survey of them, and their theoretical foundations. Section <ref> is devoted to representative numerical examples, where we discuss the decision-making process in the risky escape-route decision phenomenon by considering the drift-diffusion models and Bayesian inference frameworks. Section <ref> provides a review of recent results on human collective decision-making. Section <ref> focuses on a numerical example and discusses the role of neuromodulation and reinforcement learning in decision-making processes. Section 6 is devoted to considering decision-making processes as nonequilibrium processes, similar to learning and knowledge creation, by focusing on human biosocial dynamics with complex psychological behaviour and nonequilibrium phenomena. Conclusions are given in Section <ref>, where we highlight open problems and future research. § DDMS AND BAYESIAN MODELS FOR DECISION-MAKING §.§ DDMs in probabilistic settings While DDMs have been used in applications for a long time <cit.>, their generalization to human decision-making problems is of recent origin. The probabilistic DDM is described by sequential sampling of diffusion signals driven by Brownian motion. In particular, in the probabilistic DDM, a decision is made by the following process: * First, the decision maker accumulates evidence until the process hits either an upper or a lower stopping boundary and then stops; * Second, the decision is made by choosing the alternative that corresponds to that boundary. Unlike many other applications, these problems require considerations of stochastic dynamics with boundary conditions <cit.>. Recently, descriptions of decision-making processes in neuroscience and psychology have been proposed with probabilistic DDMs <cit.>. For t ∈ (0, ∞), the two main ingredients of our probabilistic DDM are the stochastic process X(t) and a boundary function. Let us define the following system of drift-diffusion equations modelling the decision-making process: d X(t) = μ(X(t))dt + σ dB(t), where μ∈ C([0, ∞) ×ℝ) represents the drift, while σ∈ (0,∞) is the diffusion coefficient. The term B(t) denotes the standard Brownian motion. The initial condition for the system (<ref>) is X(0) = x_0. For all x_0∈ [b(0), b̃(0)], we define the following hitting times of the boundaries α, β: α = inf{t ≥ 0: |X(t)| ≤ b(t) }, β= inf{t ≥ 0: |X(t)| ≥b̃(t) }, where b, b̃∈ C^1([0,∞)) satisfy b ≤b̃. Expression (<ref>) defines the first time at which the absolute value of the process X(t) reaches the lower boundary b (and, analogously, β is the first time the upper boundary b̃ is reached). In some cases, we can choose a specific boundary condition, e.g. reflecting or absorbing boundary conditions. In particular, the authors in <cit.> have discussed reflecting boundary conditions for a system of SDEs with applications in neuroscience. One of the reasons for a renewal of interest in drift-diffusion models in analyzing complex systems that include human dynamics and behaviours is their statistical mechanics foundations. They maintain a prominent role in a hierarchy of mathematical models derived from the Liouville equation for the evolution of the position-velocity probability density, representing the continuing interest of scientists in various areas of theory and applications <cit.>.
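As an illustration of the boundary-hitting mechanism in (<ref>)-(<ref>), the following minimal sketch simulates the process with the Euler-Maruyama scheme, using a constant drift and two absorbing boundaries; all parameter values are illustrative rather than fitted quantities.

```python
# Illustrative sketch: Euler-Maruyama simulation of dX = mu dt + sigma dB with two
# absorbing boundaries (constant drift; parameter values are for illustration only).
import numpy as np

def simulate_ddm(mu=0.5, sigma=1.0, x0=0.0, b_low=-1.0, b_up=1.0,
                 dt=1e-3, t_max=5.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x, t = x0, 0.0
    while t < t_max:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= b_up:        # analogue of the hitting time beta (upper boundary)
            return "upper", t
        if x <= b_low:       # analogue of the hitting time alpha (lower boundary)
            return "lower", t
    return "none", t         # no boundary reached within t_max

# Example: empirical choice probabilities and mean decision time over 1000 trials
rng = np.random.default_rng(0)
trials = [simulate_ddm(rng=rng) for _ in range(1000)]
print(sum(c == "upper" for c, _ in trials) / len(trials),
      np.mean([t for _, t in trials]))
```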
Similar type equations have also been discussed in the realm of open systems preserving necessary thermodynamic consistency(e.g., <cit.>). Moreover, under known simplifying assumptions, the derivation of the drift-diffusion model can further be rigorously justified, starting from a version of the Hilbert expansion. While other models within the mentioned hierarchy have also been used in crowd dynamics and related areas of active interacting particles (e.g., <cit.> and references therein), we believe that for the field of interest here, it is essential to explore further the potential of probabilistic drift-diffusion models integrated with the Bayesian framework. Consolidating knowledge between mathematical modelling and cognitive science is necessary in this undertaking. In the context of collective human decision-making in particular and the collective behaviour of living species in general, the above model hierarchy and statistical mechanics play a critical role. One of the points of entry of these ideas into the description of collective dynamics has traditionally been DDM models discussed in Section 1 as they allow us to build a bridge to psychological factors and brain dynamics (see also <cit.>, a recent review <cit.>, and references therein). In the following sections, we provide further details on how sensory observations and subsequent learning via brain networks can be linked to the model discussed here. §.§ Bayesian models for decision-making An intrinsic relationship between probabilistic drift-diffusion and Bayesian models has been emphasized in neuroscience literature for quite some time now, with a strong advocacy for their applications in the modelling of decision-making and learning processes, including reinforced learning (see, e.g., <cit.>). One of the critical class of Bayesian models is the Bayesian model for concrete sensory observations. To recognize a presented stimulus a Bayesian model compares predictions, based on a generative model, to the observed sensory input. Similar to brain networks such generative models include certain distribution of the data itself. Through Bayesian inference, this comparison leads to belief values indicating how probable it is that the stimulus caused the sensory observations. Note that this is conceptually different from the DDM where the decision process accumulates random pieces of evidence and there is no explicit representation of raw sensory input <cit.>. Therefore, a combination of these modelling approaches can be beneficial in practice (e.g., <cit.>). A Bayesian model is more complex than the probabilistic DDM. There are 4 required components <cit.>: (i) The generative input process (reflecting the physical environment) which generates noisy observations of stimulus features just as those used in the actual experiment. (ii) The internal generative models of decision makers which mirror the generative input process under the different, hypothesized decision alternatives. (iii) The inference mechanism which translates observation from (i) into posterior beliefs over the correctness of the alternatives using the generative models (ii). (iv) A decision policy that makes decisions based on the posterior beliefs from (iii). Bayesian models can be extended to include fractional Brownian motions <cit.>, in which case an extension of model (<ref>)-(<ref>) would also be required. Other extended DDMs for decision-making and learning have also been proposed (e.g. <cit.>). 
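To make the interplay of components (i)-(iv) concrete, the following minimal sketch implements them for Gaussian observation and generative densities, anticipating the concrete forms introduced in the next subsections; the parameter values and variable names are illustrative assumptions, not fitted quantities.

```python
# Illustrative sketch of components (i)-(iv): Gaussian input process, Gaussian
# generative models, recursive Bayesian belief update, and a threshold policy.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
dt, sigma, sigma_hat = 1.0, 1.0, 1.0     # illustrative values
mu_true = 0.3                            # (i) input process: true feature value
mu_hat = np.array([0.3, -0.3])           # (ii) generative means for A_1, A_2
belief = np.array([0.5, 0.5])            # prior beliefs p(A_i)
lam = 0.95                               # (iv) bound on the posterior belief

for t in range(1, 10_001):
    x_t = rng.normal(mu_true, np.sqrt(dt) * sigma)                    # (i) observation
    lik = norm.pdf(x_t, loc=mu_hat, scale=np.sqrt(dt) * sigma_hat)    # (ii) p(x_t|A_i)
    belief = lik * belief / np.sum(lik * belief)                      # (iii) update
    if belief.max() >= lam:                                           # (iv) policy
        print(f"decide A_{belief.argmax() + 1} after {t} observations")
        break
```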
§.§.§ Input process and observational sensory information for decision-making In the brain, the sensory observations are reflected by the input process. In particular, sensory observations such as visuals, are reflected in an input translated into simple, noisy feature values used for decision-making. Assume that the observational sensory processes are drawn from a Gaussian distribution whose parameters we will infer from the behavioural data. In particular, we introduce the following input process with Gaussian distribution X_t ∼𝒩(μ_i, Δ t σ^2), where μ_i is the feature value which the brain would extract under perfect noise-free observation conditions. Here, Δ t σ^2 is the variance representing the coherence of the dots (more significant variance equals smaller coherence) together with physiological noise in the brain. While our better understanding of transforming sensory inputs into percepts represents one of the principal goals in neuroscience <cit.>, the above framework assists us in formally integrating the knowledge with incoming sensory information. Knowledge creation is a complex process requiring an adequate mathematical framework, and the interested reader can consult <cit.> for further steps in that direction and a recent survey on related issues <cit.>. In what follows, this issue is addressed via the generative modelling approach under the assumption that the structure of internal representations in the brain replicates the design of the generative process by which the input process and observational sensory information influence it <cit.>. §.§.§ Generative models in Bayesian cognitive science One of the key ingredients of applications of the free-energy principle to neuroscience and biological systems is active inference with a decisive role played by generative models <cit.>. The latter provides a guidance on how sensory observations are generated and how probability-density-based prior beliefs of a cognitive system (e.g., an individual or collective humans) about its environment and other information are controlled. Such generative models enter prominently the Bayesian framework in cognitive science where the underlying idea is that cognitive processes, including those playing the central role in decision-making, are underwritten by predictions based on inferential models <cit.>. Assume that the decision maker aims to adapt its internal generative models to match those of the input process. We introduce the following generative model of an abstracted observation X_t for an alternative A_i as Gaussian densities p(X_t|A_i)= 𝒩(μ̂_i, Δ_t σ̂^2), where μ̂_i is the mean, while Δ_t σ̂^2 represents the internal uncertainty of the decision maker's representation of its observations. This approach is frequently used in what is now termed as Bayesian neurophysiology <cit.> as it allows us to empirically explain many important brain functions in terms of Bayesian inference. Its formal definition is given next. §.§.§ Bayesian inference for decision-making processes The active inference is one of the main components of the Bayesian models. In this Bayesian inference, there is a posterior belief p(A_i| X_t) that alternative A_i is true given observation X_t. In the perceptual decision-making process, where observations x_t arrive sequentially over time, a key quantity is a posterior belief p(A_i|X_1:t) where X_1:t = {X_1, …, X_t} collects all observations up to time t <cit.>. 
Then, this posterior belief can be computed recursively over time using Bayesian inference as follows (see, e.g., <cit.> for more detail): p(A_i| X_1) = p(X_1|A_i)p(A_i)/∑_j=1^M p(X_1|A_j)p(A_j) p(A_i| X_1:t) = p(X_t|A_i) p(A_i| X_1:t-1)/∑_j=1^Mp(X_t|A_j)p(A_j|X_1:t-1), where M represents the number of considered alternatives. Here, equations (<ref>)-(<ref>) imply that the posterior belief of alternative A_i is calculated by weighting the likelihood of observation X_t under alternative A_i with the previous posterior belief and normalizing the result. At the initial time step, the previous belief is the prior belief over alternatives p(A_i), which can encode biases over alternatives. The development of the concept of active inference goes hand in hand with recent advances in neuroscience, allowing the characterization of brain functions based on mathematical formalisms and first principles. As a result, the application of this approach grows, including in areas critical for collective human interfaces and decision-making such as neurorobotics and Artificial Intelligence (AI) <cit.>. §.§ Decision policy for decision-making processes In Bayesian models, decisions lie in the posterior belief p(A_i| X_1:t). A decision is then made for the alternative with the largest posterior belief when any of the posterior beliefs reaches a predetermined bound λ, which reads (see, e.g., <cit.>): max_i p(A_i|X_1:t) ≥λ. Alternatively, the decision variable based on the posterior beliefs can also be described by the following formula: max_i log p(A_i|X_1:t) ≥λ', where λ' is the posterior belief bound, while p(A_i|X_1:t) is determined as in (<ref>). We have shown the general picture of Bayesian models for decision-making processes. To understand further the role of Bayesian inference in modelling decision-making processes, we are also interested in how the brain extracts information from the sensory input signal that leads to decision-making. Hence, we will discuss this approach further through an example in the next section, where we use a Bayesian inference method based on generalized linear models to describe an escape-route decision scenario. In what follows, we will provide a few representative examples which we will use to demonstrate the application of the theoretical framework discussed in the previous section. § EXAMPLES §.§ State-of-the-art in modelling risky decision-making People make risky decisions during fire evacuations, such as moving through smoke. Although a shortcut in a smoky area may help individuals evacuate quickly, it is still dangerous. A wrong decision in choosing the escape route during fire evacuations could lead to the risk of injuries or death <cit.>. These earlier studies investigated the effects of smoke levels, individual risk preference, and neighbor behaviour on individual risky decisions to take a smoky shortcut for evacuations. We note further that, to respond to indoor fires, people often choose to evacuate from hazardous buildings to a safe place. Hence, evacuation route selection plays a critical role in determining the evacuation efficiency and whether evacuees can leave a hazardous area safely. In a perfect world, people would rationally avoid ongoing or imminent hazards when selecting a route to escape quickly. However, risk-taking behavior is widely observed during evacuations. In a building fire, people may be unaware of or underestimate dangers such as smoke and take a risky route for evacuations.
Taking a shortcut in a hazardous area, such as a smoky corridor or stairs, is a typical risk-taking behavior. In particular, in emergency situations, e.g. fire evacuations in which people move through smoke, humans may make decisions and move in a state of panic. Hence, the psychological factors we discussed earlier in this chapter play a crucial role. In such scenarios, together with a high density of smoke, one may experience uncertainty illusions such as the right-shortcut illusion. Let us define the so-called right-shortcut illusion as the situation in which participants feel that a given shortcut is the right route leading to the fastest evacuation. Motivated by <cit.>, we consider a model of the human decision-making process in choosing smoky shortcuts during fire evacuations, such as moving through the smoke. When individuals have to decide whether to evacuate through smoke, they have uncertainty about the accessibility of the smoky route. Hence, individuals may treat neighbor behavior as useful information when making judgments on risky route choices. However, such social influence on individual risk-based decision-making still needs experimental investigation <cit.>. In our consideration, we assume that humans have normal abilities in vision, color recognition, auditory sense, and movement. Humans often have the wrong percept <cit.>. In particular, they may think their own route is the best choice for escaping the emergency situation when their neighbors might have better escape-route choices, or vice versa. The illusion is usually resolved once one gains a view of the surroundings that allows the routes to be disambiguated. We asked the following question: "How do noisy sensory estimates of vision lead to uncertainty percepts of choosing the right shortcut?" In what follows, we provide representative numerical examples with the DDM and Bayesian inference to address the proposed question. §.§ Numerical results with DDM for a decision-making model It is well known that there is a relationship between visual perception and sensory cortex signals <cit.>. A major part of the brain's activity is devoted to processing the sensory inputs that we receive from the world. By generating spikes of activity, neurons in the sensory cortex respond to the stimuli received from the surrounding environment <cit.>. In order to demonstrate the application of the theoretical framework discussed in the previous Section <ref>, we provide numerical examples to investigate the visualization of surrounding environments and the sensory processing that leads to perceptual decision-making in the human brain. Using the definition of the general DDM in (<ref>)-(<ref>), we consider the following specific model of decision-making in the case of two alternative choices, namely choosing the smoky shortcut or the other route: de(t) = -c e(t)dt + v(z)dB(t), where e is the accumulated evidence, v is our sensory cortex input already containing the noise, c is the leakage constant, while B(t) represents the standard Brownian motion, as in Section 2. Note that a decision-making threshold represents the value of the decision-making variable at which the decision is made, such that an action is selected, marking the end of the accumulation of information <cit.>. In our consideration, the decision-making threshold “Thr” is equivalent to a boundary condition.
In this model, for i=1,2,3,…, the sensory cortex signal generator can be defined as follows (see, e.g., <cit.>): dv(z) = 1000γδ(z(x_i) - z(x_i-1))dt + σ dB(t), where γ, σ are constants and z(x_i) = 1/(1+e^-2x_i). In this subsection, we use a DDM to model decision-making in the case of two alternative choices for our escape-route scenario in (<ref>). The numerical results reported in this subsection are obtained by using a discrete-time integration based on the Euler method implemented in Python. In particular, we use the open-source framework provided by Neuromatch Academy computational neuroscience (https://compneuro.neuromatch.io/). In what follows, we use the sensory cortex signal profile provided in Fig. <ref> for all of our simulations. This sensory cortex signal has been generated by using formula (<ref>). The numerical results reported in this subsection are based on a simple but illustrative example compared to earlier results (e.g., <cit.>). Such results aim to investigate and provide better insight into perceptual decision-making in the human brain. In this context, the DDM is a well-established framework that allows us to model decision-making for two alternative choices. At the same time, our complementary development of the Bayesian inference approach for these problems is more suitable for predicting such choices from spike counts of neurons. We will discuss the Bayesian inference approach in more detail in Subsections <ref>-<ref>. The main numerical results of our analysis obtained with this DDM are shown in Figs. <ref>-<ref>, where we have plotted the integrator (or drift-diffusion mechanism), the proportion of visualization judgments, and the escape-route decisions. In particular, in Fig. <ref>, we have plotted the drift-diffusion mechanism as the integrated evidence of our escape-route system. Here, the threshold is equal to 1. Then, in Fig. <ref>, we have plotted the proportion of escape judgments for the dynamics of recognizing the right escape route. These plots are used to evaluate and test the performance of our model (<ref>)-(<ref>). In particular, we test our Hypotheses 1-2 for different parameter combinations. We then evaluate how our model behaves as a function of the three parameters: the threshold Thr, the leakage constant c and the noise level σ. We see that the presence of noise affects the proportion of escape route judgments. Moreover, an increase in the leakage constant, together with an increase of the threshold values, leads to a decrease in the proportion of escape route judgments for both the right and the wrong shortcut. However, the results presented in Fig. <ref> do not reflect exactly the properties of the right-shortcut and wrong-shortcut judgments. We therefore look next at the decisions on the right shortcut. In Fig. <ref>, we have plotted the escape route decisions. Our numerical results show that our hypothesis of a linear increase of visualization strength with noise only holds true in a limited range of noise. The percentage of right escape-route decisions is higher than that of wrong escape-route decisions. The curves presented in Fig. <ref> are monotonic but saturating. Our numerical results in this subsection show that the noise pushes the integrated signal over the threshold. Additionally, we observe that the less leaky the integration and the lower the threshold, the more motion decisions we get. We have shown the DDM for the escape route visualization model.
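A minimal sketch of the leaky accumulation in (<ref>), driven by a pre-computed noisy sensory signal, is given below; here the signal generator (<ref>) is replaced by a simple sigmoid-plus-noise array, and all parameter values are illustrative rather than those used for the figures.

```python
# Illustrative sketch: Euler integration of the leaky accumulator
# de = -c e dt + v dB, driven by a noisy sigmoid-shaped sensory signal v.
import numpy as np

def escape_decision(v, c=1.0, thr=1.0, dt=1e-3, rng=None):
    """Return +1 (right shortcut), -1 (other route) or 0 (no decision)."""
    rng = np.random.default_rng() if rng is None else rng
    e = 0.0
    for v_t in v:                                  # v: sensory-cortex input samples
        e += -c * e * dt + v_t * np.sqrt(dt) * rng.standard_normal()
        if e >= thr:
            return +1
        if e <= -thr:
            return -1
    return 0

# Stand-in signal: z(x) = 1/(1 + exp(-2x)) plus Gaussian noise (illustrative only)
rng = np.random.default_rng(2)
x = np.linspace(-3.0, 3.0, 3000)
v = 1.0 / (1.0 + np.exp(-2.0 * x)) + 0.1 * rng.standard_normal(x.size)
print(np.mean([escape_decision(v, rng=rng) for _ in range(200)]))
```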
As we have mentioned in the Introduction and the model description sections, the DDM is also connected with Bayesian models. We would like to analyze further how the brain extracts information from noisy input, as typically presented in perceptual decision-making tasks. In what follows, we will therefore consider another representative example, the decision-making process for the escape-route decision phenomenon, using the Bayesian inference approach. §.§ Bayesian inference modelling spiking neurons for decision-making processes As we have mentioned in the previous sections, collective decision-making can be described as the brain with a collection of neurons that, through numerous interactions, lead to rational decisions. The most commonly used tool to describe the stimulus selectivity of sensory neurons is the class of generalized linear models (GLMs) <cit.>. Let us now recall the following class of GLMs, namely, the logistic regression model for predicting decision-making from spike counts. First, we introduce the fundamental input/output equation of logistic regression <cit.>, which reads ŷ≡ P(y=1|x,θ) = σ(θ^T x) = σ(z(θ,x)), where ŷ denotes the output of logistic regression. Here, ŷ can be considered as the probability that y = 1 given inputs x and parameters θ. Additionally, σ represents a "squashing" function called the sigmoid function or logistic function. The output of such a logistic function is defined as follows: σ(z(θ,x)) = 1/(1 + e^-z(θ,x)), where z(θ,x) = α + θ_1x_1 + θ_2x_2 + … +θ_n x_n = α + ∑_i=1^nθ_i x_i, for i = 1,…,n. Motivated by <cit.>, we are interested in a Bayesian treatment of the models for predicting stimulus from spike counts for our decision-making processes. Our methodology can be generalized to other classes of models that go beyond the GLM class described above (e.g., <cit.>). In this subsection, we investigate the sensory evidence accumulation activity during human decision-making. We have built the risky decision model presented in subsection <ref>. That model predicts that accumulated sensory evidence from sensory cortex signals determines whether the human should choose the smoky shortcut. Here, using the descriptions of Bayesian inference, we build the sensory neuron data and would like to see if that prediction holds true. The data contain N=40 neurons and M=400 trials for each of the three visualizing conditions: no smoky shortcut, slightly smoky shortcut and high-density smoky shortcut. In order to address our question, we need to design an appropriate computational data analysis pipeline. Moreover, we need to somehow extract the escape route judgements from the spike counts of our neurons. Based on that, our algorithm needs to decide: was there a right shortcut or not? This is a classical two-choice classification problem. We must transform the raw spike data into the right input for the algorithm (a process known as spike pre-processing, e.g., <cit.>). Noise in the signal drives whether or not people perceive the smoke level. The brain may use the strongest signal at a peak level of the noisy input to decide on choosing the shortcut, but it may be better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, which would be the case in the high-density smoke condition. We want to test this too.
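As an illustration of this classification pipeline, the following minimal sketch decodes a binary escape-route judgement from spike counts with ℓ_2-regularized logistic regression; the synthetic data generation below is only a placeholder for the actual N=40-neuron, M=400-trial recordings and is not meant to reproduce our results.

```python
# Illustrative sketch: decoding escape-route judgements from spike counts with
# L2-regularized logistic regression (synthetic placeholder data, not real recordings).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 40
y = rng.integers(0, 2, n_trials)                    # 1 = "right shortcut", 0 = other
base = rng.gamma(2.0, 2.0, (n_trials, n_neurons))   # baseline firing rates
X = rng.poisson(base * (1.0 + 0.5 * y[:, None]))    # spike counts modulated by choice

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
acc = cross_val_score(clf, X, y, cv=8)              # 8-fold cross-validated accuracy
print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```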
Using the description of logistic regression <cit.>, as an example, we introduce the following hypotheses focussing on specific details of our overall research question: Hypothesis 1: Accumulated sensory cortex spike rates explain visualization of smoke judgements better than average spike rates around the peak of the smoke level, and Hypothesis 2: Classification performance should be better for high-density smoke shortcuts and low-density smoke shortcuts. Mathematically, we can write our hypotheses as follows (using our above ingredients): * Hypothesis 1: E(c_accumulate) > E(c_average spike); * Hypothesis 2: E(c_high smoke density) > E(c_low smoke density), where E denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type. In what follows, we use the logistic regression as a case in point to predict stimulus from spike counts for the proposed escape route decision problem. §.§ Numerical results with the Bayesian approach for a decision-making model In this subsection, we use a Bayesian inference approach to model decision-making in the case of 2 hypotheses provided in the previous subsection <ref> for our choosing the right escape route scenario. The numerical results reported in this subsection are obtained by using a logistic regression model, as an example, implemented in Python. In particular, we use the open-source framework provided by Neuromatch academy computational neuroscience (https://compneuro.neuromatch.io/). We are also using the method of logistic regression to predict stimulus from spike counts. As mentioned earlier, other models that go beyond the GLM class can also be used in this context. The main numerical results of our analysis in this Bayesian approach are shown in Figs. <ref>-<ref>, where we have plotted the average spike counts. The average test accuracy profile and the comparison of the accuracy between the right shortcut and right shortcut judgements. In Fig. <ref>, we have plotted the averaged spike counts. Blue represents the shortcut with a high density of smoke condition and produces flat average spike counts across the 3s time interval. The orange and green lines show a bell-shaped curve corresponding to the smoke level profile. There are fluctuations in the averaged spike counts. It is clear that there is noise in our consideration. In order to see the effects of noise on the data accuracy, we look at the following results in Fig. <ref> on the average test accuracy profile (see, e.g., <cit.>). In Fig. <ref>, we have plotted the average test accuracy profile obtained by using classifier accuracy of the logistic regression. Prediction accuracy ranges from 91% to 99%, with the average at 95%, and the orange line is the median at 97%. We observe that our prediction has a high accuracy even though the given data includes noise factors. It could be better to split the average accuracy according to the conditions where we could visualize the escape route but of different magnitudes. Then, it should work better to classify higher smoke density from no smoke as compared to classifying the lower smoke density. The spike activity also works better if we ignore some of the noise at the beginning and end of each trial by focusing on the spikes around the maximum smoke density, using our window option. Additionally, we see that the average spike count plot above seems to best discriminate between the three levels of the smoke density conditions around the peak at time zero. Looking at Fig. 
<ref>, using the logistic regression, it is clear that the results for small window data of 100 ms and the full data set are totally different. In particular, the accuracy between the true escape route and wrong escape route judgements in the case of a small window of data is smaller than in the case of a full data set. This is due to the fact that the brain also integrates signals across longer time frames for perception. On the other hand, in the predictions based on accumulated spike counts, the high density of smoke are harder to separate from no smoke than the low density of smoke. This is clearer when predicting real choice than when predicting escape route judgements. Moreover, it is also clear that the real accumulated spike counts approximate the judgements accumulated spike counts for small window data and for the case of a full data set. We observe that the logistic regression works quite well in the case of solo small window data and the case of a solo full data set. Notice that right escape route judgments display higher decoding accuracy than the wrong choice. If right escape route judgements and our logistic regression use input from the same noisy sensors, it can be expected that they would both give similar output. This aligns with the notion that escape route judgements can be wrong because the underlying sensory signals are noisy. Of course, this only works if we record activity from neuronal populations contributing to escape route judgements. On the other hand, we would also see this result if the sensory signal was not noisy and we recorded from one of several populations that contribute to escape route judgements in a noisy way. We have shown representative examples of DDM and Bayesian inference models for human decision-making. The DDM approach has described the decision-making of individuals, while the Bayesian inference, based on logistic regression or other models, could describe collective decision-making via the brains with a collection of neuron activities. Both approaches lead to human decisions in different views. However, we know that humans not only interact individually but also interact with human groups. The decision-making problems in this situation bring a lot of challenging questions to scientists due to the complexity of human behaviour. Taking this inspiration, in what follows, we will provide a review of the recent developments in collective human decision-making and its application in brain network models. Our numerical simulations can be compared with the numerical results reported in <cit.>. Using an immersive virtual reality (VR)-based controlled experiment, the authors in <cit.> have studied the effect of smoke level, individual risk preference, and neighbour behaviour on individual risky decisions to take a smoky shortcut for evacuations. The study in <cit.> aimed to conduct a controlled experiment to verify the influence of the smoke, individual risk preference, and neighbour behaviour on individual risky decisions, i.e., whether to evacuate through the smoke. The experiment manipulated the density of the virtual smoke in the immersive virtual environments to investigate the effect of smoke level on participants’ route selection (see, e.g., in <cit.> and related references therein). Their numerical results showed that a higher smoke density lowered the use rate of a smoky shortcut during evacuations when participants needed to choose between the risky shortcut and another available route without the smoke. 
In particular, 89.05 % of participants evacuated through the shortcut, but the percentage reduced to 55.24% in the slight smoke scenarios, with 25.24% in the cases of dense smoke. In <cit.>, the authors considered the influence of smoke, individual risk preference, and neighbour behaviour on individual risky decisions. However, our model investigates the simpler case of only smoke effects from the computational neuroscience perspective. We also found that the density level of the smoke is a critical factor in determining whether people will take a risk. When the smoke density increases, humans tend to be willing not to choose a smoky route. Next, we will devote our efforts to deeper exploring intrinsic links between collective decision-making and the complex operation of brain networks. § COLLECTIVE DECISION-MAKING AND BRAIN NETWORKS One of the most important topics in human decision-making studies is collective decision-making. This topic has attracted the interest of a large number of scientists from different fields, including mathematicians, engineers, psychologists and neuroscientists. Collective decision-making has been explained as a fundamental cognitive process required for group coordination <cit.>. In order words, this process can be considered as the subfield of collective behaviour concerned with how groups reach decisions <cit.>. The group decision-making processes can account for also the unavoidable variation in individual decision accuracy. Decision theory has been applied successfully to human groups by showing how to optimally weight individuals’ contributions to the group decisions according to their accuracy. The authors in <cit.> have provided empirical evidence for human sensitivity to task-irrelevant absolute values indicating a hardwired mechanism that precedes executive control. On the other hand, the collective decision-making processes and social learning processes, including opinion dynamics, are closely connected in social science. In particular, many researchers investigate collective decisions in humans, deepening our understanding of the dynamics of economies and social policies <cit.>. A multi-agent system is also a collective of autonomous agents interacting in a shared environment <cit.>. The multi-agent systems play an important role in a variety of application domains, such as traffic, human decision-making, control, and complex systems <cit.>. They appear to be indispensable tools for studying bio-social interactions and play an important role during the recent pandemic (e.g. <cit.>). An agent-based model to explain the emergence of collective opinions not based on feedback between different opinions, but based on emotional interactions between agents has been proposed in <cit.>. Collective decision-making is described not only as individuals in a group either reaching a consensus on one of several available options or distributing their workforce over different tasks but also as the brains with a collection of neurons that, through numerous interactions, lead to rational decisions <cit.>. In <cit.>, the authors have evaluated recent progress in understanding how these basic elements of decision formation, including deliberation and commitment, are implemented in the brain. In particular, the decisions are characterized by many sensory-motor tasks that can be thought of as a form of statistical inference. 
Additionally, we know that many aspects of human perception are best explained by adopting a modelling approach in which experimental subjects are assumed to possess a full generative probabilistic model of the task they are faced with and that they use this model to make inferences about their environment and act optimally given the information available to them <cit.>. Hence, these decision-making systems normally include noise. A number of researchers in <cit.> have addressed the question, "What is the (unknown) state of the world, given the noisy data provided by the sensory systems?". The elements of such a decision-making process are described in terms of probability theory, e.g. Bayesian methods <cit.>. Recently, the Bayesian approach to perception and action has been used in modelling human decision-making. This approach has attracted the interest of many researchers from different fields and has successfully accounted for many experimental findings <cit.>. Unlike the individual decision-making proposed in the previous sections, such Bayesian inference can be also used to model collective decision-making. One of the challenging questions the brain research accounted for regarding human social decision-making is when decisions are made in a social context, the degree of uncertainty about the possible outcomes of our choices depends upon the decision of others. A model-based account of the neurocomputational mechanisms guiding human strategic decisions during collective decisions has been discussed in <cit.>. An influential review of brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective has been given in <cit.>. Using the free-energy principle and active inference approach, two free-energy functionals for active inference in the framework of Markov decision processes have been compared in <cit.>. In developing an optimal Bayesian framework based on partially observable Markov decision processes, the authors in <cit.> have shown that humans simulate the “mind of the group” by modelling an average group member’s mind when making their current choices, in the group decision-making. A brain network supporting social influences in human decision-making has been discussed in <cit.>. Such social influences can often lead to interactive decision-making under partially available information <cit.>. One of the important classes of collective decision-making is the self‑driven collective dynamics with graph networks <cit.>. This collective dynamics plays an important role in self-organization for decision-making processes <cit.>. A central concept connecting the microscopic and macroscopic levels of neurons is criticality in brain studies <cit.>. Moreover, the main elements of the criticality hypothesis are the evolutionary arguments and a plausible general mechanism that can explain the self-organization to the critical state <cit.>. A review of the experimental findings of isolated, layered cortex preparations to self-organize toward four dynamical motifs presently identified in the intact cortex in vivo: up-states, oscillations, neuronal avalanches and coherence potentials have been provided in <cit.>. There are many ways to highlight different features of human decision-making dynamic modelling and a rich set of associated mathematical problems — one of them we present in Fig. <ref>. 
In this figure, we highlight the importance of the triad: neuroscientific foundations, mathematical modelling, and analysis. In particular, starting with neuroscientific considerations, we use mathematical modelling to build the models for this complex process. Then, we use mathematical analysis to theoretically prove the well-posedness and show the properties of the associated models. In bridging the gap between the different components of the above triad and addressing related problems, some progress has been achieved (e.g., <cit.> and references therein), with many open issues remaining. Note also that collective decision-making is not limited to the human behavioural system. It is ubiquitous across the living and artificial collectives <cit.>. Collective dynamics also include collective emotions. Many models have been proposed to capture collective emotions <cit.>. Individual and group-based emotions are individual-level or micro-level phenomena. In contrast, collective emotions are defined as macro-level phenomena that emerge from emotional dynamics among individuals responding to the same situation <cit.>. In <cit.>, the authors have discussed collective emotions in the larger context of collective-level psychological phenomena, defined collective emotions and discussed their key components, and then showed how collective emotions emerge from individual-level emotional interactions. On the other hand, there is another direction in capturing the collective emotion dynamics. For instance, the collective emotional dynamics during the COVID-19 outbreak have been considered in <cit.>. Recent results on learning dynamics with graph networks are provided in <cit.>, where the authors have shown another interesting direction in capturing the learning of self‑driven collective dynamics with graph networks. The proposed approach could potentially be useful in modelling collective decision-making by using learning dynamics with graph networks. Understanding the character of the dynamics of sensory decision-making behaviour offers many challenging questions to scientists due to the fact that decisions may depend on a large number of task covariates, including the sensory stimuli, an agent’s choice bias, past stimuli, past choices, past rewards, etc. In what follows, in order to understand better collective human decision-making, we provide numerical examples of collective dynamics in the approach based on brain networks considered as collections of neurons as well as in the group dynamics. § EXAMPLES OF COLLECTIVE DYNAMICS IN THE APPROACH BASED ON BRAIN NETWORKS CONSIDERED AS COLLECTIONS OF NEURONS A large part of brain regions is critically involved in solving the problem of action selection or decision making, e.g. the cortex and the basal ganglia <cit.>. Furthermore, neuromodulation systems also participate in a variety of cognitive processes, such as motivation, mood, and learning <cit.>. Take dopamine as an example of neuromodulation. Dopamine's role is one of the most important factors in reward processing and motivation in decision-making. Many researchers from different fields have investigated strong evidence of the role of dopamine in learning the value of actions, stimuli and states of the environment <cit.>. In the basal ganglia (BG), in particular to the striatum, but also to the frontal cortex, the substantia nigra is also a crucial source of dopamine <cit.>. A series of mechanisms that reinforce and favor stimuli and actions are implemented by the dopaminergic system <cit.>. 
Understanding what drives changes in decision-making behavior is in the domain of reinforcement learning (RL) <cit.>. RL is a framework for defining and solving a problem where an agent learns to take actions that maximize reward <cit.>. The main feature of RL is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment. The main elements of RL include an agent, biological or artificial, that observes the current state of the world and selects an action based on that state. In particular, when taking an action, the agent receives a reward and then uses this information to improve its future actions. The action sequences of human decision-making usually involve many cognitive processes such as beliefs, desires, intentions, and theory of mind, i.e., what others are thinking. Due to the complexity of human behaviours, artificial intelligence (AI) offers a powerful tool for predicting human behaviours while treating them agnostically with respect to the underlying psychological mechanisms <cit.>. Developments of AI algorithms enable studies of higher-level, brain-inspired functionality <cit.>. One of the most important concepts of RL in the field of AI is the reward prediction error. Based on the recent work in <cit.>, we recall a model of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning. Temporal difference (TD) learning is a class of model-free RL methods which learn by bootstrapping from the current estimate of the value function <cit.>. In contrast with classical TD learning, we briefly introduce the distributional TD method. For the observed state x, let f: ℝ→ℝ be a response function and V_i(x) a set of value predictions. The set of values is updated with learning rates α_i^+, α_i^-∈ℝ^+ upon the transition from the given state x to the state x'. This transition results in a reward signal r and involves the time discount γ∈ [0,1). The distributional TD errors are computed as follows: δ_i = r + γ V_j(x') - V_i(x), where V_j(x') represents a sample from the distribution V(x'). Then, the distributional TD model updates the baselines with the following formula: V_i(x) ⟵ V_i(x) + α_i^+ f(δ_i) for δ_i > 0, V_i(x) ⟵ V_i(x) + α_i^- f(δ_i) for δ_i ≤ 0. The main numerical results have been obtained here using the open-source framework provided by <cit.> to simulate different properties of dopamine neurons within reward prediction error (RPE) theory. In particular, we apply the tabular simulations of classical TD and distributional TD, using a population of learning rates selected uniformly at random for each cell, to obtain Fig. <ref>. When we consider the difference between classical TD and distributional TD, we use a separately varying learning rate for negative prediction errors. The method uses a linear response function. Moreover, the simulations for classical TD focus on immediate rewards, while the simulations for distributional TD learn distributions over multi-step returns. In Fig. <ref>, we plot the different dopamine neurons that consistently reverse from positive to negative responses at different reward magnitudes. In the first panel, we plot the RPEs produced by classical TD simulations; we see that all cells carry nearly the same RPE signal. The presence of Gaussian noise causes slight differences between cells.
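For completeness, a minimal sketch of the distributional TD update (<ref>)-(<ref>) with a linear response function f and per-cell asymmetric learning rates is shown below; it uses an illustrative single-state, immediate-reward setting (so no bootstrap sample from V(x') is needed) and is not a reproduction of the open-source code of <cit.>.

```python
# Illustrative sketch: distributional TD with a linear response f and asymmetric
# per-cell learning rates, in a single-state, immediate-reward setting (gamma = 0).
import numpy as np

rng = np.random.default_rng(0)
n_cells, gamma = 20, 0.0
alpha_pos = rng.uniform(0.02, 0.2, n_cells)    # learning rates for delta_i > 0
alpha_neg = rng.uniform(0.02, 0.2, n_cells)    # learning rates for delta_i <= 0
V = np.zeros(n_cells)                          # value predictions V_i

for _ in range(20_000):
    r = rng.choice([0.1, 1.0, 2.5])            # stochastic reward magnitudes
    delta = r + gamma * 0.0 - V                # RPEs delta_i, one per channel
    V += np.where(delta > 0, alpha_pos, alpha_neg) * delta   # asymmetric update

# Cells with larger alpha_pos relative to alpha_neg settle at more optimistic
# (higher) reward predictions, so reversal points differ across the population.
print(np.sort(V))
```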
Note that in contrast to classical TD learning, distributional TD consists of a diverse set of RPE channels, each of which carries a different value prediction, with varying degrees of optimism across channels <cit.>. In the second panel, in the distributional TD case, cells have reliably different degrees of optimism. Unlike the classical TD, there are fluctuations in the system; we observe that some cells use different RPE signals. However, we add Gaussian white noise to the distributional system, and the cells have their behaviours quite similar to the case of classical TD. Here, we observe that all cells carry nearly the same RPE signal. Using the assumption with the brain represents possible future rewards not as a single mean but as a probability distribution. All dopamine neurons should transmit essentially the same RPE signal (see more details in, e.g. in <cit.>). We have provided an example based on the results reported in <cit.>, where we have added the Gaussian white noise in the distributional TD case. We also observed that all cells carry nearly the same RPE signal. The presence of noise in the system could bring also benefits <cit.> since all dopamine neurons should transmit essentially the same RPE signal. The results provided in <cit.> have contributed to the development of the theory of dopamine. The study of dopamine provides explanations on a unifying framework for understanding the representation of reward and value in the brain. This direction of research would potentially be useful in the study of collective decision-making via the brains with a collection of interacting neuron systems provided in Sections <ref>-<ref>. Additionally, a better understanding of uncertainty factors in dopamine-based RL could contribute to further developments in the fields of learning dynamics in human decision-making studies, brain disorders and other applications. We know that human decision-making is affected directly by learning and social observational learning. The models of interaction of direct learning and social learning at behavioral, computational, and neural levels have attracted the interest of researchers from different fields. The authors in <cit.> have provided a brain network model for supporting social influences in human decision-making. In particular, they have used a multiplayer reward learning paradigm experiment in real time. The numerical experiment shows that individuals succumbed to the group when confronted with dissenting information, but observing confirming information increased their confidence. Furthermore, the results could potentially contribute to the development of the study of learning dynamics in neural networks or social network problems. After appreciating the learning and social influence in decision-making, we turn to the social decision-making dynamics in groups. Along with studying the decision-making of each individual as highlighted in Sections <ref>-<ref>, the decision-making in groups of individuals is also important <cit.>. Let us recall results reported in <cit.>, where the authors have considered the decision process and information flow with a DDM extended to the social domain. This DDM can be considered an extended version of the DDM provided in Sections <ref>-<ref>. In particular, the implementation of the social DDM proposed in <cit.> can be described as follows: Each individual first accumulates their personal information about the state of the world. 
Then, during the social phase, the individuals can account for additional social information, e.g., they can incorporate the choices of others. Once an individual has accumulated sufficient evidence (i.e., the decision boundary is exceeded), the decision is made (a minimal simulation sketch of such a social drift-diffusion process is given below). Applying adaptive behavioural parameters and using evolutionary algorithms (see, e.g., <cit.>), we can examine how individuals should strategically adjust decision-making traits to the environment. A typical procedure for the analysis is as follows. First, we consider groups of individuals whose interests are completely aligned, with individuals equally sharing a group payoff ("cooperative groups"). Then, we investigate what happens when the collective interest is at odds with individuals' self-interest: we examine how introducing individual-level competition (i.e., a payoff solely based on one's own performance) shapes evolved behaviours and the corresponding payoffs across group sizes and error cost asymmetries. To understand better the collective dynamics of such a DDM, we look at the recently reported numerical results presented in Fig. <ref> (see <cit.> for further details). Based on Fig. <ref>, we can analyse the evolutionary outcomes of cooperative and competitive groups at an error cost ratio of 4, following the results first presented in <cit.>. In panel (A), we see that competitive groups developed a stronger start point bias towards the signal boundary than cooperative groups. In panel (B), the competitive groups evolved higher boundary separations than the cooperative groups. On the other hand, we can also observe in panel (C) that both cooperative and competitive groups evolved to the maximum level of social drift strength. This observation confirms the strong benefit of using social information, or even of copying the first responder, independent of group size or cooperative setting in the decision-making system. Additionally, in panels (D)-(E), we see that individuals in competitive groups made slower choices and achieved a lower payoff than individuals in cooperative groups. On the other hand, at larger group sizes, both cooperative and competitive groups achieved a higher payoff. Moreover, panel (E) shows that cooperative groups benefited much more from larger groups. This is due to the fact that the larger start point bias and boundary separation that evolved in competitive groups partly undermined the benefits of collective decision-making (see more details, e.g., in <cit.>). The new and forthcoming results in this field, such as those reported in <cit.>, could further contribute to understanding a wide range of social dynamics applications, from crowd panics, medical emergencies and epidemics to critical law enforcement situations and smart city system designs. As we have to deal with many challenges of uncertainty in collective human decision-making problems, some approaches have inevitably been left outside this chapter's scope. Among them are those based on intelligent and fuzzy systems <cit.>. Additional insight into decision-making processes can also be obtained from various proxy systems of collective human behaviours, such as online social network systems <cit.>, as well as through systematic studies of biosocial dynamics where multiscale approaches accounting for small-scale effects become essential <cit.>. We will also mention the approaches based on inverse problem ideas and data-driven models, inverse Bayesian inference, inverse RL, and nonlinear programming <cit.>.
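Returning to the social DDM described above, a minimal simulation sketch (in Python) of one trial is given below. The drift rates, boundary separation, social coupling, and group size are illustrative assumptions and are not the evolved parameter values reported in <cit.>.

```python
import numpy as np

rng = np.random.default_rng(1)

def social_ddm_trial(n_agents=5, drift=0.1, social_drift=0.4,
                     boundary=1.5, start_bias=0.0, noise=1.0, dt=0.01, t_max=30.0):
    """One trial of a social DDM: each agent accumulates private evidence plus a
    social drift proportional to the fraction of agents that already decided for
    each boundary. Returns decisions (+1 / -1 / 0 undecided) and response times."""
    x = np.full(n_agents, float(start_bias))   # evidence accumulators
    decided = np.zeros(n_agents)               # +1, -1, or 0 (still accumulating)
    rt = np.full(n_agents, np.nan)
    t = 0.0
    while t < t_max and np.any(decided == 0):
        active = decided == 0
        # social term: pull towards the boundary chosen by earlier responders
        social = social_drift * (np.mean(decided == 1) - np.mean(decided == -1))
        dx = (drift + social) * dt + noise * np.sqrt(dt) * rng.standard_normal(active.sum())
        x[active] += dx
        hit_up = active & (x >= boundary)
        hit_down = active & (x <= -boundary)
        decided[hit_up], decided[hit_down] = 1, -1
        rt[hit_up | hit_down] = t
        t += dt
    return decided, rt

decisions, times = social_ddm_trial()
print(decisions, times)
```

Varying the start point bias, boundary separation, and social drift strength in such a sketch reproduces, at least qualitatively, the trade-offs between speed, accuracy, and reliance on social information discussed above.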
Finally, the relevance of AI methodologies in our context has already been mentioned in Section <ref>. Such methodologies can facilitate collective human decision-making and be invaluable in critical situations such as emergencies and rescue operations <cit.>.
§ REMARKS ON HUMAN BIOSOCIAL DYNAMICS WITH COMPLEX PSYCHOLOGICAL BEHAVIOUR AND NONEQUILIBRIUM PHENOMENA
What we discussed in the previous sections regarding biosocial human collective decision-making, linking neuroscience, mathematical modelling, and analysis (see Section 4), provides a good starting point for a deeper investigation of the role of nonequilibrium phenomena in biosocial and behavioural psychological dynamics and approaches. It is essential to realize that knowledge creation (see, e.g., <cit.> and references therein) and associated decision-making steps are nonequilibrium processes. Further, a critical element for decision-making is working memory, the brain's ability to store and recall information temporarily. In its turn, given that working memory is affected early during the onset of neurodegenerative diseases such as Alzheimer's, it can serve as a key to better understanding the course of such diseases and developing treatments. The next important remark is pertinent to the multiscale character of decision-making that we briefly mentioned in Section 5. We have in this research domain all the typical scales that we normally consider when dealing with nonequilibrium processes:
* Microscopic scales: strong energy fluctuations and complex information processing which, in their turn, induce a broad range of interactions, timescales, biochemical reactions, products and complexes, far from equilibrium considerations;
* Kinetic / Mesoscopic scales: here nonequilibrium processes may depart remarkably from standard scenarios of Gibbs ensemble averages and Boltzmann models, and interfacial transport and mixing provide prominent examples of nonequilibrium processes coupling kinetics to meso- and macroscopic scales;
* Macroscopic scales: with (multi) fields and (multiphase) flows considerations, we have interfacial mixing and nonequilibrium dynamics which can be (a) nonlocal (may include contributions from all the scales and may sense initial and boundary conditions <cit.>), (b) inhomogeneous (e.g., flow fields may not only be non-uniform, but they may involve fronts), (c) anisotropic (their dynamics depend on the directions), (d) statistically unsteady (with mean values of the quantities varying with time and with time-dependent fluctuations around these means), (e) and their invariance, correlations, and spectra may differ substantially from those of conventional fluid dynamics considerations, including turbulence, making corresponding CFD models frequently questionable in this field.
However, with the above considerations, serious challenges quickly become apparent at the levels of theory, experiment, and fundamentals. While some have common features with other nonequilibrium analyses, others also have their specifics for this research domain. At the level of theory, in the general setting, we have to deal with the multiscale, multiphase, nonlinear, nonlocal, and statistically unsteady character of the dynamics. At the level of the experiment, interfacial mixing and nonequilibrium processes are a challenge to implement and study systematically in a well-controlled environment.
Moreover, a systematic interpretation of these processes from the data is an additional challenge since the processes are statistically unsteady and may impose the influence of an observer on observational results. At the fundamental level, interfacial mixing and nonequilibrium dynamics require a unified description of particles and fields across the scales based on a synergy of theory, simulations, and experiments. Successes in these areas open new opportunities for studying the fundamentals of interfaces and mixing and their nonequilibrium dynamics. Statistical mechanics considerations should not neglect its thermodynamic origin and the system's interaction with its environment. Hence, additional challenges come from the fact that in some environments, nonequilibrium dynamics of interfaces and (interfacial) mixing are expected to be enhanced. In contrast, in some others, such dynamics should be mitigated and/or tightly controlled. When dealing with decision-making processes, including psychological behaviour into account, the drift-diffusion models play a vital role in the hierarchy of mathematical models, providing an initial stepping stone for moving to nonequilibrium phenomena. Recall, for example, that higher-level models such as the n-dimensional Fokker-Planck (FP) equation describe a drift-diffusion interplay with time-dependent drift (vector) and diffusion (matrix) coefficients for the probability density: ∂ P/∂ t = - ∑_μ=1^n∂/∂ x_μ (v_μ P) + ∑_μ, ν=1^n∂/∂ x_μ ( D_μν∂ P/∂ x_ν). It is well-known that one of many possible derivations of such models starts with a Markov process, with its continuous state described by a hierarchy of equations that collapses effectively just to the Chapman-Kolmogorov equation. Alternatively, we can start with the master equation - a Markov chain, where both the states and the time are discrete, considered in the limit where time is continuous, and then do a coarse-graining to get the FP equation via certain transformations such as the Kramers-Moyal expansion with its second-order truncation. It is also known that for a free energy functional such equations are equivalent to Markov processes defined on graphs (e.g., <cit.>) and can be well-suited for brain network models, a connection we pursued in the previous two sections. Other motivational papers for our current consideration can also be mentioned (e.g., <cit.>). In analyzing social human collective decision-making, the DDM model is a prime tool in accounting for psychological features of biosocial dynamics. It is a model of sequential sampling that has been widely used in psychology and neuroscience to explain the observed patterns of choice and response times in a range of binary-choice decision problems (e.g., <cit.>). We described the main theoretical setting for the DDM in Section 2, and it is worthwhile also noting that it can be useful to carry out the Sequential Probability Ratio Test (SPRT) in that context, given that the SPRT plays a prominent role in psychological research (e.g., <cit.>). The simplest example could be to assume that the probability of seeing a measurement given the state is a Gaussian (normal) distribution where the mean (μ) is different for the two states (p(m_t| s = ± 1)) but the standard deviation (σ) is the same. Then, by carrying out a series of measurements, we would like to figure out what the state is, given our measurements. 
To do this, we can compare the total evidence up to time t (the final time of measurements) for our two hypotheses (that the state is +1 or that the state is -1). We can do this by computing a likelihood ratio, that is, the ratio of the likelihood of all these measurements given the state is +1, p(m_1:t|s = +1), to the likelihood of the measurements given the state is -1, p(m_1:t|s = -1). This likelihood ratio test is typically quantified by taking the log of this likelihood ratio, log [p(m_1:t|s = +1)/p(m_1:t|s = -1)]. Applying this framework, the influence of noise on decision-making can be analyzed. Using the above-mentioned example and plotting the trajectories under the fixed stopping-time rule for different variances, one can easily conclude that higher noise would lead to the evidence accumulation varying up and down more, which translates into the fact that humans are more likely to make a wrong decision with high noise. In this case, even if the actual distribution corresponds to s = +1, the accumulated log-likelihood ratio is more likely to be negative at the end. The situation is different when the variance is small because each new measurement will be very similar. Finally, in applications of this model to human decision-making, we also see that humans are more likely to be wrong with a small number of time steps before the decision, as there is more chance that the noise will affect the decision. To move beyond such simple examples, we note that stochastic adaptation dynamics can be studied by using Hidden Markov Models (HMMs). Developing this idea into our context, we note that psychological behaviour can be studied through the associated adaptation dynamics of humans based on their sensory systems. To do that, in the modelling framework, the starting point can be taken as a Markov process where the joint distribution of observations and hidden states is modelled (that is, both the prior distribution of hidden states, the transition probabilities, and the conditional distribution of observations given states, the emission probabilities). So, we come to an HMM, which is a generative type of model (because it is aimed at the joint probability distribution on a given observable variable and target variable). Then, the idea (see, e.g., <cit.>) is to explore the connection between the adaptation dynamics, where the adaptation variable is driven out of equilibrium by an external stimulus, and the dissipation dynamics (measured by the rate of entropy production, quantified as the relative entropy between forward and backward trajectories of the dynamics). Suppose we again assume that our state in the decision-making process can be either +1 or -1. In that case, the dynamics can be easily defined by a 2×2 matrix, and the equation for the evolution of the probability of the current state (represented by a two-dimensional vector) can easily be obtained. Note that in this case, the elements of the matrix are defined by the probability of switching to state s_t=j (j = ± 1) from the previous state s_t-1 = i (i = ± 1), that is, by the conditional distribution p(s_t = j|s_t-1 = i). This HMM setting is inspired by neural brain systems. Indeed, we know that neural systems switch between discrete states, observable only indirectly, through their impact on neural activity. HMMs let us reason about these unobserved (hidden or latent) states using a time series of measurements. We are looking at the likelihood, which is the probability of our measurement (m), given the hidden state (s): P(m|s).
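Before turning to the HMM machinery in more detail, the evidence-accumulation example above can be summarized in a short sketch that accumulates the log-likelihood ratio under a fixed stopping-time rule. The means, the standard deviation, and the number of measurements are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

mu_plus, mu_minus, sigma = 1.0, -1.0, 4.0    # illustrative emission parameters
true_state = +1
T = 50                                       # fixed stopping time (number of measurements)

m = rng.normal(mu_plus if true_state == 1 else mu_minus, sigma, size=T)

# per-measurement log-likelihood ratio: log p(m_t|s=+1) - log p(m_t|s=-1)
llr = norm.logpdf(m, mu_plus, sigma) - norm.logpdf(m, mu_minus, sigma)
evidence = np.cumsum(llr)                    # accumulated evidence over time

decision = +1 if evidence[-1] > 0 else -1
print("accumulated log-LR:", evidence[-1], "decision:", decision)
# With larger sigma (noisier measurements) or smaller T, the sign of the final
# accumulated evidence is wrong more often, i.e. erroneous decisions become more likely.
```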
The immediate goal then would be to learn how changing the HMM's transition probability and measurement noise could impact the data, and how uncertainty increases as we predict the future (and, ultimately, how to gain information from the measurements). In applying these ideas to decision-making under uncertainty, our binary variable s_t ∈{-1,1} becomes a latent variable that switches randomly between the two states. In the simplest setting, one can use a 1D Gaussian emission model m_t|s_t ∼𝒩(μ_s_t, σ_s_t^2) that provides evidence about the current state (see, e.g., <cit.>). Unlike the cases discussed earlier, in an HMM we cannot directly observe the latent states s_t, because we only get noisy measurements m_t characterized by p(m|s_t). To move forward, we assume that humans have normal vision, colour recognition, auditory sense, and movement abilities. What is important to emphasize is that humans often have the wrong percept. In particular, they may think that their own route is the best choice for escaping an emergency situation when their neighbours might have better choices of escape route, or vice versa. The illusion in such situations is usually resolved once one gains a view of the surroundings that allows the routes to be disambiguated. However, the problem here has much deeper roots, and the interested reader is encouraged to look at it from the thermodynamics point of view and decision-making in the arrow of time (e.g., <cit.>). In our example, we use an HMM to model decision-making in the case of two alternative choices in an escape-route scenario. In particular, the state +1 represents the human choosing the nearest visible escape route, while state -1 stands for choosing another route. While the 2×2 transition matrix in our HMM can easily be constructed, simulations based on these considerations lead to some important conclusions regarding the limits of probabilistic state switching. This can be analyzed by plotting "forgetting" curves, i.e., the probability of switching states as a function of time. As the probability of switching increases (under the noise modelled by the Gaussian emission model that provides evidence about the current state), we eventually start observing oscillations, indicating that one forgets more quickly with a high switching probability because the person becomes less certain that the state is the known one. The above considerations led us to believe that the adaptation should be cast as an inference problem for nonequilibrium dynamics, so that we can use either Bayesian or Kalman filters for its solution. While we have emphasized the importance of inference problems throughout the previous sections, in the nonequilibrium context we refer the interested reader to (e.g., <cit.>) for further details. In brief, the resulting filtered adaptation dynamics couples the human (sensory) response function to the state of the environment in a noisy manner, allowing for their study in terms of the stochastic (thermo)dynamics of nonequilibrium systems. This occurs due to the delay between changes in the stimulus's statistics and the adaptation mechanism's response, leading to irreversible dynamics. The latent state s_t (more precisely, the location of the state s at time t) will evolve as a stochastic system in discrete time, with the evolution matrix satisfying certain conditions connected with nonequilibrium work relations (e.g., <cit.>). For the reasons mentioned below, we use Kalman filtering in this case.
A Kalman filter recursively estimates a posterior probability distribution over time using a mathematical model of the process and incoming measurements. This dynamic posterior allows us to improve our guess about the state's position; besides, its mean is the best estimate one can compute of the state’s actual position at each time step (see, e.g., <cit.>). There are several open-source codes with algorithms for HMMs with Kalman filtering, among which we shall mention the open-source framework for Biological Neuron Models of the Neuromatch Academy. Using the associated algorithms, we have analyzed and tracked how humans choose an escape route in emergency situations described in the previous section. In forward inference with HMM without filtering, we observe many outliers for the measurements compared with the true state (the measurements are considered as position over time according to the sensor). These are hidden states far away from the states +1 and -1, and such examples demonstrate that our estimations, in this case, are not really helpful in tracking how humans choose the nearest visible escape route or choose another route until they make a decision (for the simulations run to the time corresponding 100 s). In all these experiments, we assumed that humans will make a decision to choose the nearest visible escape route if the final value of the state is positive; otherwise, humans will choose another route with negative values. A very different situation has been obtained with the Kalman filter inference where in the case of non-equilibrium dynamics (modelled with continuous time Markov Chains), we saw that our estimations perfectly match the true state, showing that the HMM with Kalman filtering provides a viable tool to track how humans make their decisions. Our previous two sections studied the link between collective decision-making and brain networks. To bring this study to the next level, one should recall that brain dynamics are known to be nonequilibrium (e.g., <cit.>). Until very recently, neural systems in general and brain network models, in particular, have been predominantly studied based on two main approaches: (a) biologically-inspired approaches (connectome-type of models), and (b) Bayesian approaches. While the latter approaches were also a part of our consideration here, it is worthwhile mentioning they were challenged empirically with experimental facts in decision-making, and the interested reader can find examples in the literature of the breakdown of the classical framework of cognitive science based on such approaches. While studying brain network models, we should resort to nonequilibrium information dynamics in the general setting. Indeed, it is well-known that the conditions for learning do not happen in an equilibrium state. In this case, whether physical or biological systems, they do have information systems features that can be studied via (a) information storage (memory), (b) information transfer (signalling), and (c) information modification (computation). Hence, for the study of such information systems in the regime of nonequilibrium states, one cannot avoid challenges involving minimizing relative entropy (e.g., the Kullback-Leibler divergence) and looking at phase transitions near high mutual information with respect to different distributions. On a side note, this route should also be fundamental in clarifying Artificial Intelligence (AI) concepts when looking at nonequilibrium. 
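Returning to the filtering step used above for the escape-route example, the predict-update recursion of a Kalman filter can be illustrated with a minimal one-dimensional sketch. Here the latent route preference is treated as a continuous linear-Gaussian variable, which is a simplification of the binary-state HMM setting; all parameter values are illustrative assumptions rather than those of the Neuromatch Academy framework mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent dynamics: x_t = a*x_{t-1} + process noise; measurement: m_t = x_t + noise
a, q, r = 1.0, 0.05, 1.0          # transition coefficient, process var., measurement var.
T = 100
x = np.zeros(T)
m = np.zeros(T)
x[0] = 0.2                        # slight initial drift towards the "+1" route
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
    m[t] = x[t] + np.sqrt(r) * rng.standard_normal()

# Kalman filter: recursive posterior mean mu and variance P
mu, P = 0.0, 1.0
mu_hist = np.zeros(T)
for t in range(1, T):
    # predict step
    mu_pred = a * mu
    P_pred = a * P * a + q
    # update step with measurement m[t]
    K = P_pred / (P_pred + r)             # Kalman gain
    mu = mu_pred + K * (m[t] - mu_pred)
    P = (1 - K) * P_pred
    mu_hist[t] = mu

decision = "nearest visible route" if mu_hist[-1] > 0 else "another route"
print("final filtered estimate:", mu_hist[-1], "->", decision)
```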
Our motivation in the nonequilibrium analysis of brain networks and associated decision-making is rooted in the analysis of neurodegenerative diseases (e.g., <cit.>). Starting from earlier papers (e.g., <cit.>), we know, for example, that Alzheimer's and other kinds of dementia affect working memory at an early stage. Hence, as a critical element in decision-making, working memory is also crucial for better understanding neurodegenerative diseases such as Alzheimer’s disease and frontotemporal dementia (e.g., <cit.>). For advanced approaches to model working memory, one can follow the premises where the FP equation is derived based on the Langevin equation, taking the latter as a starting point with a biophysical circuit model for working memory (e.g., <cit.>) and apply one of the available methodologies for its solution such as the nonequilibrium landscape-flux method. Our next remark concerns nonequilibrium dynamics, biosocial self-organization, and developmental psychology. Given that we are considering systems whose constituent elements consume energy, they are, by definition, out-of-equilibrium. They are active systems, and as a part of complex systems, their particularly attractive aspect is the emergence of cooperative phenomena, or self-organization, often driven by nonequilibrium dynamics that relies on an external (energy) source. As we all know, paradigmatic examples here include flocking, collections of cells, etc, where the tools based on field theory, entropy production to measure to which degree the equilibrium is broken, and reaction-diffusion models can be applied. We have to deal with nonequilibrium processes coupling kinetics to meso- and macroscopic scales, including interfacial transport and mixing. The importance of microscopic scales further complicates the picture. This is coming not only from human biosocial dynamics, but also from their complex psychological behaviour, and they are closely interconnected. Despite that, these complex processes may still lead to self-organization and may thus expand opportunities for diagnostics and control of nonequilibrium dynamics. It should be emphasized that the attempts to incorporate psychology with self-organization when dealing with biosocial dynamics of humans have been around at least since I. Prigogine got his Nobel Prize in 1977 (see, e.g., <cit.>, as well as later works in the context of human decision-making, e.g. <cit.>, and others). Some of these and more recent works included renormalization group approaches for analyzing critical phenomena in complex systems with nonequilibrium features (e.g., <cit.>), as well as new ideas on evolving cycles and self-organized criticality in biosocial dynamics (see, e.g., <cit.>, and references therein). Our final remark goes to nonequilibrium dynamics and AI methodologies. Considering AI tools and Machine Learning (ML) methods have emerged as an important direction to study problems in statistical mechanics, nonequilibrium phenomena pertinent to psychological behaviours and the biosocial dynamics of humans should not be an exception. Moreover, it is critical to do so. Indeed, we know that while the microscopic dynamics of physical systems are time reversible, the macroscopic world does not necessarily share this symmetry (e.g., <cit.>). Further, as it was pointed out, fluctuations at small microscopic scales lead to an effective “blurring” of time’s arrow, and attempts have been made to quantify our ability to “perceive” its direction in a system-independent manner. 
In doing so, thermodynamic relationships such as the Clausius inequality can only be expressed in terms of averages, and such problems as the direction of time's arrow can be quantified as a problem in statistical inference in a way similar to what we discussed above. Further, we have already pointed out that since the general form of the nonequilibrium steady state is not known (unlike the equilibrium case with the Boltzmann distribution), generative models are an ideal candidate to model and learn these (nonequilibrium) distributions with unsupervised learning techniques (whereas such nonequilibrium distributions as Kappa (e.g., <cit.>) may serve as a testing ground for that). An example of such generative models based on HMMs has been discussed above. Of course, while these ideas are critical in the areas we have discussed here, they can also be applied to such problems as the estimation of free energy differences, as well as to the identification of physical quantities that distinguish different regimes of dynamics in out-of-equilibrium phenomena, but these considerations go beyond the scope of this chapter. Among future directions of the nonequilibrium considerations discussed in this section, we would like to mention (a) exploring a theoretical basis of the connection between the dynamics of quantum decision-making and the free-energy principle in cognitive science (e.g., <cit.>), (b) refining mathematically the concepts of nonequilibrium psychology (e.g., when a group of people agrees on something, what they agreed upon is a consensus, and a consensus is by definition an equilibrium), and (c) expanding the area of applications where such concepts become decisive, including applications in nonequilibrium social science and policy; this requires better estimates of policy consequences at the microlevel and advances in behavioural sciences revealing how people/firms/governments do behave in practice (e.g., <cit.>), and ultimately better predictions of their behaviours.
§ CONCLUSIONS
We have proposed and described probabilistic drift-diffusion models and Bayesian inference frameworks for human social decision-making. In particular, we have provided details of the models and representative numerical examples. We also discussed the decision-making process in choosing an escape route scenario by considering the drift-diffusion models and Bayesian inference frameworks. Our numerical results have demonstrated that the right shortcut displays higher decoding accuracy than the wrong shortcut. Moreover, the average test accuracy is increased to 96.62% even though the given data includes noise. The accuracy between the right shortcut and bad shortcut judgements in the case of a small window of data is lower than in the case of a full data set. Furthermore, we have also provided a review of recent results on collective human decision-making and highlighted key approaches and challenges in analyzing human biosocial dynamics with complex psychological behaviour in the nonequilibrium setting. We examined recent developments in collective human decision-making and its applications with brain network models. We have found that neuromodulation and reinforcement learning methods are essential in human decision-making. Furthermore, the collective decision-making process has been scrutinized for cooperative and competitive groups, subject to different parameter choices.
A better understanding of such decision-making systems would contribute to further developments in social human decision-making studies, higher-level brain-inspired functionality studies, and other applications. The authors are grateful to the NSERC and the CRC Program for their support. RM also acknowledges support of the BERC 2022-2025 program and the Spanish Ministry of Science, Innovation and Universities through the Agencia Estatal de Investigacion (AEI) BCAM Severo Ochoa excellence accreditation SEV-2017-0718.
http://arxiv.org/abs/2307.00341v1
20230701133824
Effective temperatures of classical Cepheids from line-depth ratios in the H-band
[ "V. Kovtyukh", "B. Lemasle", "N. Nardetto", "G. Bono", "R. da Silva", "N. Matsunaga", "A. Yushchenko", "K. Fukue", "E. K. Grebel" ]
astro-ph.SR
[ "astro-ph.SR" ]
The technique of line depth ratios (LDR) is one of the methods to determine the effective temperature of a star. They are crucial in the spectroscopic studies of variable stars like Cepheids since no simultaneous photometry is usually available. A good number of LDR-temperature relations are already available in the optical domain; here we want to expand the number of relations available in the near-infrared in order to fully exploit the capabilities of current and upcoming near-infrared spectrographs. We used 115 simultaneous spectroscopic observations in the optical and the near-infrared for six Cepheids and optical line depth ratios to find new pairs of lines sensitive to temperature and to calibrate LDR-temperature relations in the near-infrared spectral range. We have derived 87 temperature calibrations valid in the [4 800–6 500] K range of temperatures. The typical uncertainty for a given relation is 60-70 K, and combining many of them provides a final precision within 30-50 K. We found a discrepancy between temperatures derived from optical or near-infrared LDR for pulsation phases close to ϕ≈0.0 and we discuss the possible causes for these differences. Line depth ratios in the near-infrared will allow us to spectroscopically investigate highly reddened Cepheids in the Galactic centre or in the far side of the disk. Stars: fundamental parameters – stars: late-type – stars: supergiants – stars: variables: Cepheids
§ INTRODUCTION
The effective temperature T_eff is a fundamental parameter of stellar atmospheres. Therefore, deriving the effective temperature of a star is the most important step in the analysis of a stellar spectrum, which enables the determination of the chemical composition of the star and of its evolutionary status. Since, by definition, the effective temperature is the temperature of a black body that produces the same total power per unit area as the observed star, it can be derived directly by knowing the stellar luminosity and radius <cit.>, making interferometric techniques the best tool at our disposal. Unfortunately, interferometric measurements are currently limited to nearby stars that do not yet cover the entire parameter space. Alternatively, the infrared flux method <cit.> also provides T_eff and the angular radius of the star by combining its integrated flux and the infrared flux in a given band. Also, the Surface Brightness color relation is used to derive the angular diameter variation and the distance of Cepheids <cit.>. It then becomes possible to calibrate (spectro-)photometric techniques, for instance measuring the Paschen continuum (3647-8206 Å) to determine T_eff from stellar fluxes. Another (robust) method relies on T_eff-color calibrations <cit.>. Photometric techniques to derive T_eff are however sensitive to the other atmospheric parameters of the star (for instance its metallicity [Fe/H] or its surface gravity log g).
Moreover, it is always difficult to obtain an accurate determination of the interstellar reddening, especially for faint, distant objects in highly extincted regions, for which we have started to obtain high-resolution spectra, in particular in the near-infrared (NIR) spectral domain. Purely spectroscopic methods might then be preferred. For instance, fitting the profile of Balmer lines <cit.> provides a good diagnostic (although only below ∼8 000 K) thanks to their low sensitivity on log g. Metal line diagnostics enable us to determine simultaneously the atmospheric parameters T_eff, log g, and [Fe/H] (and microturbulent velocity for 1D-analyses), either by means of their curves of growth <cit.> or by ensuring that abundances of various lines from the same element show no trend with their excitation potentials (to constrain T_eff) or with their equivalent width (to constrain the microturbulent velocity). They can be applied even in the case of pulsating variable stars like classical Cepheids, see for instance <cit.>. Such techniques require however accurate determinations of the atomic parameters of the line (e.g., their oscillator strengths and damping constants) and they are sensitive to departures from the Local Thermodynamical Equilibrium (LTE). Continuous progress in our knowledge of the physics of stellar atmospheres and increased computing power now allows us to directly compare an observed spectrum with grids of synthetic <cit.> or empirical <cit.> spectra. The line depth ratios (LDR) method, which is based on the ratio of the depths of two lines having different sensitivity to T_eff <cit.>, presents the advantage of being free from reddening effects and provides a high internal precision (≈10 K). In FGK stars, the depths of low-excitation lines of neutral atoms are highly responsive to T_eff, while those of high-excitation lines are relatively insensitive to it <cit.>. LDR calibrations are available for dwarf and giant stars <cit.>. Combining a large number of calibrations improves the precision of the temperature determination significantly (a schematic illustration of this averaging is given below). The concept of LDR has recently been expanded to flux ratios (FR) by <cit.>, focussing on small wavelength domains rather than the core of absorption lines, and with exquisite absolute calibration. They have been adapted to the specifics of Cepheids by <cit.>. <cit.> (see also <cit.>) calibrated LDR for Cepheids in the optical domain. <cit.> have confirmed the validity of the line depth ratios approach using 2D numerical models of Cepheid-like variable stars, where non-local, time-dependent convection is included from first principles. Line depth ratios of Cepheids have paved the way for studying the distribution of metals in the Milky Way thin disk <cit.>. Cepheids in the Magellanic Clouds also allow us to investigate the distribution of metals in the young population of these galaxies <cit.>. Moreover, since the Large Magellanic Cloud is used to calibrate period-luminosity (PL) relations, LDR play a crucial role in investigating the possible metallicity dependence of PL relations <cit.>. Finally, LDR have also been applied to old (>10 Gyr) type II Cepheids <cit.>, opening a new path to investigate thick disk and halo stars. Cepheids' LDR also allowed us to trace temperature variations over the pulsation cycle <cit.>, to discover peculiar Cepheids with high lithium content, presumably crossing the instability strip for the first time <cit.>, and to investigate Cepheids pulsating in two modes simultaneously <cit.>.
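To illustrate how a set of LDR-temperature relations is combined in practice, the sketch below evaluates a few hypothetical calibration polynomials on measured depth ratios and averages the resulting temperatures. The coefficients, validity ranges, and ratios are illustrative assumptions only; they are not the calibrations derived in this paper.

```python
import numpy as np

# Hypothetical LDR-Teff calibrations of the form Teff = c0 + c1*r + c2*r**2,
# each with its own validity range in the ratio r (all values illustrative only)
calibrations = [
    {"coeffs": (6550.0, -1200.0, 150.0), "valid": (0.4, 1.8)},
    {"coeffs": (6400.0,  -950.0,  80.0), "valid": (0.3, 1.5)},
    {"coeffs": (6700.0, -1400.0, 220.0), "valid": (0.5, 2.0)},
]

# Depth ratios measured on one spectrum, one per calibration (illustrative)
ratios = [0.95, 1.10, 0.80]

temps = []
for cal, r in zip(calibrations, ratios):
    lo, hi = cal["valid"]
    if not (lo <= r <= hi):
        continue                      # each relation is used only inside its validity range
    c0, c1, c2 = cal["coeffs"]
    temps.append(c0 + c1 * r + c2 * r**2)

temps = np.array(temps)
teff = temps.mean()                   # adopted temperature: mean over usable relations
scatter = temps.std(ddof=1)           # scatter of the individual estimates
print("Teff = %.0f K (scatter %.0f K from %d relations)" % (teff, scatter, temps.size))
```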
The LDR method proved to be effective when applied to optical spectra, but it is only high-resolution IR spectroscopy that makes it possible to access the most distant stars in the Galactic disk and thereby understand the structure and evolution of the Milky Way in its innermost region, where interstellar extinction presents a serious problem <cit.>. The primary objects of surveys in this region usually have high luminosity – namely, giants and supergiants. Recently, <cit.> found 9 LDR- relations using spectra of 8 stars (mainly giants) in the H-band (14 000-18 000 Å) for ranging from 4 000 to 5 800 K with uncertainties of ∼60 K. Later, <cit.> increased the number of calibrations to 11 and achieved a precision of 35 K for the range 3 700<<5 000 K. Recently <cit.> report five new LDR- relations found in the H-band region and 21 new relations in the K-band. <cit.> found 81 calibrations for the within 3 700<<5 400 K, using spectra of 9 giants in the Y- and J-band, and <cit.> investigated the correlation between those calibrations and . Subsequently, <cit.> obtained new LDR pairs of 1-1 lines for red giants and supergiants with of 3 500–5 500 K. For spectra in the Y- and J-band, <cit.> developed a method for simultaneously determining and for FGK stars of all luminosity classes; in so doing, they used 13 calibrations to deduce and 9 calibrations to derive . All those calibrations were originally obtained in the IR range for the Y-, J-, H- and K-bands; however, they were only valid for rather low temperatures, while classical Cepheids reach above 6 000 K. In this paper we want to expand the number of relations available for Cepheid studies in the near-infrared range. Sect. <ref> describes the near-infrared spectra we used to search for new pairs of lines well-suited as temperature indicators, as described in Sect. <ref>. The new LDR- calibrations are then investigated in Sect. <ref>. Sect. <ref> summarizes our results. § SPECTROSCOPIC MATERIAL A large number of high-resolution spectra of six well-known bright classical Cepheids (Table <ref>) were obtained with GIANO <cit.>, a NIR cross-dispersed echelle spectrograph, operating at the 3.6m Telescopio Nazionale Galileo (TNG). It covers the wavelength range 9 500–24 500 Å and operates at a very high-resolving power (R≈50 000). Optical spectra were obtained in parallel with the High Accuracy Radial velocity Planet Searcher North spectrograph <cit.>. HARPS-N covers a large fraction of the optical range (Δλ=3 900–6 900 Å) at very high resolving power (R≈100 000). The observing log is given in Table <ref>. Five additional spectra (3 of them for the calibrating Cepheids) were obtained in the H-band with the Infrared Camera and Spectrograph (IRCS) at the Subaru 8.2m telescope with a resolving power of R≈20 000 (, see Table <ref>). Since we have no means to derive a priori their as we do not have simultaneous optical spectra for those stars, they were only used for testing the newly obtained relations. The spectral analysis (setting the continuum position, measuring line depths and equivalent widths) was carried out using the DECH software package [http://www.gazinur.com/DECH-software.html]. The absorption lines of Cepheids are usually fairly broad due to pressure and Doppler broadening together with a moderate rotation (ω≤10 km/s), and their a Voigt profile can be approximated by a Gaussian. However, they may become strongly asymmetric at some phases <cit.>. 
For this reason, we did not fit the entire profile but measured the line depths R_λ (that is, between the continuum and a parabola fit of the line core) as described by <cit.>. Typical number of data points on which we performed the parabolic fit is 4-5. The H-band spectra, in particular, are heavily contaminated by the absorption features caused by the Earth's atmosphere when observed from ground-based facilities. We did not perform a telluric correction, which consists in removing telluric features from the spectra. Instead, we used only wavelength ranges known to be practically free of telluric lines. We cannot exclude, however, that a few spectral lines are slightly contaminated by telluric lines. § SEARCHING FOR TEMPERATURE-SENSITIVE LINE PAIRS With the recent development of near-infrared spectrographs, it has become possible to extend the use of line depth ratios as indicators to this domain. <cit.> were the first to provide calibration relations, in the H-band (1.50–1.65 μm). However, the paucity of low-excitation lines in this wavelength range, together with the strong molecular bands and numerous telluric lines, limited the number of useful LDR pairs to nine. Such a small number limits the precision in to ≈50 K in the most favorable cases, while precisions of the order of 5–15 K can be routinely achieved in the optical thanks to a large number of available LDRs <cit.>. Later on, <cit.> extended this number to 81 covering the Y- and J-band. §.§ Searching for useful lines In this study, we adopted a new approach: first, we selected two spectra of classical Cepheids with temperatures of about 5000 and 6200 K, representative of the range of temperatures reached by this class of stars. Line depths were then measured for all the spectra, regardless of whether the lines were blended or not, also including lines that are not reliably identified. Only lines that could be measured both in the stars with of 5000 K and 6200 K were kept, in order to ensure that the final relations will be applicable over a broad range. For these lines, we then computed the ratios of their depths, R_6200/R_5000 and split them into three groups showing significant (1), moderate (2), or slight (3) variations with . Pairs most likely suitable for further testing were chosen from the first and third groups. We set as an additional condition that the distance between two lines composing a given pair should not exceed 300 Å. This algorithm yielded 1500 potentially useful line pairs. Finally, the selected lines were measured in all the spectra. The 1500 potential relations were visually inspected and fitted with polynomial relations. Ultimately, only the 87 best calibrations, accurate to within 150 K, were retained. They are shown in the Appendix (Figures <ref>-<ref>). Examining the atomic parameters of the lines in the selected calibrations allowed us to draw the two following conclusions: - Even lines with similar excitation potentials of the lower level (EPL) can show a good correlation with temperature. This unexpected conclusion can be explained as follows: if we consider two lines with close EPLs, but different oscillator strengths (), then at a given the weaker line may be located on the linear part of the curve of growth, while the stronger line lies on the horizontal part. Thus the ratio of the depths of these two lines will be sensitive to . An example for such a pair of lines is given in Fig. <ref>. 
As a consequence, such a calibration can only be used for a limited range of ; it presents however the advantage of being independent of luminosity (or ). Indeed, lines with different EPLs respond differently to variations. - Although one would expect that only unblended lines should be considered (leaving only a small number of them available in the H-band, for instance), it is nevertheless possible to use strong blends to derive calibrations, provided that these blends change monotonically, gradually and unequivocally with the variations. An example for such a line pair is shown in Fig. <ref>, corresponding to the calibration relation 76 (see Fig. <ref>) - We note in passing that in case a telluric line would accidentally superimpose on a stellar line (which is more likely to happen in the H-band), the stellar line is discarded. Spectral lines of supergiants are usually considerably wider than the telluric lines, as shown in Fig. <ref>. We did not use lines distorted by the influence of telluric lines. §.§ Calibrating relations For the TNG sample, the values used to calibrate the LDR relations have been derived from the optical HARPS-N spectra obtained quasi simultaneously. Indeed, the beginning of the exposures is shifted by only a few minutes, which is negligible since our 6 calibrating Cepheids have a period of ≈ 5 days and more. We used the LDR from <cit.> (typically 50-60 of them are available in a given HARPS-N spectrum). This ensures that the new NIR LDR will fall on the exact same scale as those derived in the optical. We retained as calibrating the mean value of each temperature derived from a single calibrating relation in the optical, and the uncertainty on is the standard deviation of these measurements, usually around 10–30 K. With both line depth ratios and values at hand, it is possible to derive analytical formulae for new calibrating relations in the near-infrared. Polynomials offer a simple way to derive analytical relations, but a number of our calibrating relations show specific features such as breaks that cannot be adequately described even by polynomials of the 5^th or higher degree (see Figs. <ref>–<ref>). Therefore, we also tried more complicated relations, such as exponential fits, logarithmic fits, power fits, the Hoerl function (y =a b^x x^c) and others. The type of function (and the corresponding coefficients) that yielded the lowest root-mean-square deviation σ for a given calibration was ultimately selected. In many cases, the precision of an individual calibration relation varies with . This is related to the fact that the line strengths vary with temperature. For instance, at high , absorption lines with low EPL become weaker, leading to greater uncertainties in the measurement of their depths, until they eventually disappear from the spectrum. To take this effect into account, we have defined as an optimum range for a given ratio the range within which the mean precision (σ) of the calibration relation remains within 160 K. Since various relations have various optimum ranges, we note that only a (large) subset of the 87 relations can be used for a Cepheid at a given temperature. This also holds for optical spectra and explains why the number of optical relations used to determine from the HARPS-N spectra varies from star to star. Uncertainties on the line-depth measurements mainly arise from uncertainties in setting the continuum position, hence the presence of noise or telluric lines. 
It can be determined from lines that fall twice on adjacent orders of the echelle spectra. This uncertainty is about 2-6% for spectra with a signal-to-noise ratio of about 100. A complete analysis of the errors associated with measuring line depths in spectra is given in <cit.>. Besides, individual stellar parameters such as metallicity, rotation, convection, NLTE effects, magnetic fields, binarity, etc., add to the scatter of the individual calibrations. An analysis of such effects was presented in the studies by <cit.>. The list of the calibrating relations, including the values for the coefficients, the intrinsic dispersion, and the applicability range, is given in Table <ref> (Appendix). They are displayed in Figs. <ref>-<ref>. § TESTING THE NEW LINE DEPTH RATIOS IN THE NIR The temperatures inferred from both the optical and NIR spectra and their respective uncertainties are given in Table <ref>. A direct comparison of these temperatures is shown in Fig. <ref>. As can be seen, the agreement is excellent, and the largest deviations are localized for the highest above 6200 K. Fig. <ref> provides an alternative look to the same data, displaying the variations of with the pulsation phase, where was computed with either the HARPS-N calibration data in the optical or the new NIR relations. The latter are provided for both the GIANO spectra and the SUBARU spectra, when available (see also Table <ref>). Also with such a point of view, the agreement remains excellent, with the largest deviations being confined to phases close to ϕ=0.0 where is maximal. We first notice that for T Vul, a classical Cepheid with the shortest period in the calibrating sample (and hence, the lowest luminosity and the largest surface gravity), the NIR temperatures near the peak are systematically lower than those deduced from optical spectra. Conversely, for S Vul, the long-period classical Cepheid with the highest luminosity (lowest surface gravity) in the calibrating sample, the NIR temperatures are higher than those deduced from optical LDR. This points toward a luminosity (or ) effect on the line depths ratios. Several (related) explanations can be proposed for such behavior. <cit.> already detected the effect of surface gravity on LDR. Indeed, for several pairs of lines, they noticed that the LDR– relations were offset between dwarfs on one hand, and giants and supergiants on the other hand. They found that the difference between the ionization potentials of lines in a given pair correlates with the sensitivity of this pair to . A detailed theoretical analysis of this effect can be found in <cit.> and <cit.>. In order to circumvent this drawback, they suggested calibrating separately dwarfs and giants/supergiants. However, in contrast with <cit.>, who report no effect within the giants-supergiants group, we find here significant effects (for a narrow range of pulsation phases) for Cepheids, that is, for stars within the giants/supergiants luminosity class. We note however that the range of luminosities for the six Cepheids in our calibrating sample is very wide and amounts to three magnitudes (their absolute magnitudes vary from –3 to –6 , see Table <ref>). This may indicate that the effect in Cepheids is not, or not only, a effect. For instance, values may also differ due to the differences in the optical depths of the line-forming regions for the optical and IR ranges. 
These differences can be significant at given pulsation phases, for instance, due to the shock wave passing through the upper atmospheric layers of Cepheids near the maximum compression. Indeed, <cit.> investigated CRIRES observations of the long-period Cepheid l Car and found, using an hydrodynamical model of this star <cit.>, that the core of the 1 line at 22 089,69 Å is formed at the top of the atmosphere, while the iron lines in the visible are formed much deeper in the atmosphere. They report additional evidence that lines in the infrared are formed closer to the surface of the star than lines in the optical, for instance the infrared radial velocity curve is shifted with respect to its optical counterpart, which they interpret as a manifestation of the Van Hoof effect <cit.>, the delay in the velocities between lines forming in the lower and upper atmosphere. Similarly, the mean radial velocity derived from infrared data differs by 0.53∓0.30 km s^-1 from the optical one, which they interpret as a different impact of granulation on line forming regions in the upper and lower atmosphere <cit.>. To wrap things up, it seems established that visible and infrared lines are formed at different depths in the atmosphere of a Cepheid, and thus in environments in which not only temperature and pressure are different, but also the velocity fields (due to the propagation of the compression wave). The latter is clearly visible in the different behaviour of line asymmetries for optical and infrared lines over the pulsation period <cit.>. Since we measure the line depths directly, without fitting a line profile, we assume that the differences between short- and long-period Cepheids at phases ϕ≈0.0 we observe in Fig <ref> mostly reflect temperature differences rather than uncertainties on measuring line depths related to different line asymmetries. Furthermore, we note that the theoretical analyses described in <cit.> and <cit.> are made under the Local Thermodynamical Equilibrium (LTE) assumption, while <cit.> have shown that NLTE effects are important in the atmospheres of Cepheids and maximal at the same phases (ϕ≈0.0) where the discrepancy between optical and NIR line depth ratios is significant. Finally, it is worth mentioning that long-period Cepheids are known to exhibit cycle-to-cycle variations <cit.>, including in their line profiles. However, this phenomenon cannot be invoked here since our optical and NIR have been observed simultaneously. Should long-period Cepheids be excluded from the calibration of the LDR, then their could not be determined and hence their chemical composition would remain unknown. § SUMMARY AND CONCLUSION In the present study, we have derived 87 temperature calibrations, LDR-, using GIANO high-dispersion near-IR H-band spectra covering the wavelength range from 14 000 to 16 500 Å that contains numerous atomic lines and molecular bands. The temperatures inferred from the optical spectra obtained in parallel with the HARPS-N spectrograph were adopted as original temperatures to derive calibration relations. The resulting temperature relations are based on 115 spectra of six classical Cepheids. The calibrations are valid for supergiants with a near-solar metallicity, ranging from 4800 to 6500 K and from –3 to –6 mag. The uncertainties due to the effect of luminosity at temperatures above 6200 K are within 150 K. The typical mean uncertainty per calibration relation is 60-70 K (40-45 K for the most precise ones and 140-160 K for the least precise ones). 
Using about 60-70 calibrations improves the intrinsic precision to within 30-50 K (for spectra with an S/N of 100-150). Employing this method, we can derive temperatures of highly reddened objects (such as stars towards the Galactic centre). Adopting these calibrations has already enabled us to determine the temperatures of four Cepheids in the Galactic centre discovered by <cit.> in order to derive their chemical composition <cit.>. Since many Cepheids have been detected in highly reddened regions, for instance beyond the Galactic center in the far side of the disk <cit.>, the newly determined LDR will allow us to derive their chemical composition using NIR spectra. To our knowledge, only <cit.> tackled this problem so far, determining the metallicity of 5 Cepheids candidates in the inner disk by comparing low-resolution (R≈3 000) NIR spectra to a pre-computed grid of synthetic spectra. Obtaining spectroscopic time-series for a given star would make it possible to track tiny variations, potentially related to rotational modulation such as those that have already been detected for dwarf stars – namely, the G8 dwarf ξ Bootis A <cit.> and the K0 dwarf σ Dra <cit.>. This technique is already being used to study spot activity in giants <cit.>. In this respect, hemisphere-averaged temperatures of stars with surface inhomogeneities derived from NIR lines simultaneously to optical lines can be of great help for starspot modelling. Indeed, one expects different average temperatures at different wavelengths due to the wavelength dependence of the contribution of starspots to the total flux. It would be interesting to search simultaneously for systematic variations in spectral line asymmetries in order to better understand the physics of pulsations in Cepheids. As far as Cepheids are concerned, simultaneous time-series spectroscopy in the optical and infrared domain are crucial to refine our understanding of the Cepheids' atmosphere dynamics. In the present paper, the calibration sample is confined to objects with a near-solar metallicity [Fe/H] to circumvent the issue of the dependence of calibrations on [Fe/H]. Investigating such a dependence of calibrations on [Fe/H] will be the goal of further studies. § DATA AVAILABILITY This research used the facilities of the Italian Center for Astronomical Archive (IA2) operated by INAF at the Astronomical Observatory of Trieste, programme (PI: N.Nardetto). The Subaru/IRCS spectra are available at the SMOKA Science Archive https://smoka.nao.ac.jp/. § ACKNOWLEDGEMENTS We thank our referee, Dr. Antonio Frasca, for his important comments, which improved our manuscript. VK is grateful to the Vector-Stiftung at Stuttgart, Germany, for support within the program "2022–Immediate help for Ukrainian refugee scientists" under grant P2022-0064. 99 [Afşar et al.2023]Afsar2023 Afşar M., Bozkurt Z., Topcu G. B., Özdemir S., Sneden Chr., Mace G. N., Jaffe D. T., López-Valdivia R., 2023, ApJ, 949, 86A [Alonso et al.1996]Alonso1996 Alonso A., Arribas S., Martinez-Roger C., 1996, A&A, 313, 873 [Anderson2016]Anderson2016 Anderson R. I., 2016, MNRAS 463, 1707 [Andrievsky et al.2002a]Andrievsky2002a Andrievsky S. M., Kovtyukh V. V., Luck R. E., Lepiné J. R. D., Bersier D., Maciel W. J., Barbuy B., Klochkova V.G., Panchuk V. E., Karpischek R. U., 2002a, A&A, 381, 32 [Andrievsky et al.2002b]Andrievsky2002b Andrievsky S. M., Bersier D., Kovtyukh V. V., Luck R. E., Maciel W. J., Lepniné J. R. D., Beletsky Yu. 
V., 2002b, A&A, 384, 140 [Andrievsky et al.2002c]Andrievsky2002c Andrievsky S. M., Kovtyukh V. V., Luck R. E., Leépine J. R. D., Maciel W. J., Beletsky Yu. V., 2002c, A&A, 392, 491 [Andrievsky et al.2004]Andrievsky2004 Andrievsky S. M., Luck R. E., Martin P., Leépine J. R. D., 2004, A&A, 413, 159 [Andrievsky et al.2005]Andrievsky2005 Andrievsky S. M., Luck R. E., Kovtyukh V. V., 2005, AJ, 130, 1880 [Berdyugina et al.2005]Berdyugina2005 Berdyugina S. V., 2005, Living Rev. Sol. Phys., 2, 8 [Bessell et al.1998]Bessell1998 Bessell M. S., Castelli F., Plez B., 1998, A&A, 333, 231 [Biazzo et al.2004]Biazzo2004 Biazzo K., Catalano S., Frasca A., Marilli E., 2004, Mem. Soc. Astron. Ital. Supplementi, 5, 109 [Biazzo et al.2006]Biazzo2006 Biazzo K. , Frasca A., Catalano S., Marilli E., 2006, preprint (arXiv astro-ph/0610584) [Biazzo et al.2007]Biazzo2007 Biazzo K., Frasca A., Catalano S., Marilli E., 2007, Astron. Nachr., 328, 938 [Blackwell & Shallis1977]Blackwell1977 Blackwell D. E., Shallis M. J., 1977, MNRAS, 180, 177 [Caccin et al.2002]Caccin2002 Caccin B., Penza V., Gomez M. T., 2002, A&A, 386, 286 [Catalano et al.2002]Catalano2002 Catalano S., Biazzo K., Frasca A., Marilli E., 2002, A&A, 394, 1009 [Cayrel & Cayrel1963]Cayrel1963 Cayrel G., Cayrel R., 1963, ApJ, 137, 431 [Chen et al.2018]Chen2018 Chen X., Wang S., Deng L., de Grijs R., Yang M., 2018, ApJS, 237, 28 [Cosentino et al.2012]Cosentino2012 Cosentino R. et al., 2012, in McLean I. S., Ramsay S. K., Takami H.eds, Proc. SPIE Conf. Ser. Vol 8446, Ground-based and Airborne Instrumentation for Astronomy IV. SPIE, Bellingham. p. 84461V [da Silva et al.2016]daSilva2016 da Silva R. et al., 2016, A&A, 586, A125 [da Silva et al.2022]daSilva2022 da Silva R. et al., 2022, A&A, 661, A104 [Davis & Webb1974]Davis1974 Davis J., Webb R. J., 1974, MNRAS, 168, 163 [Feast et al.2014]Feast2014 Feast M. W., Menzies J. W., Matsunaga N., Whitelock P. A., 2014, Nature, 509, 342 [Fraska et al.2005]Fraska2005 Frasca A., Biazzo K., Catalano S., Marilli E., Messina S., Rodonò M., 2005, A&A, 432, 647 [Frasca et al.2008]Frasca2008 Frasca A., Biazzo K., Taş G., Evren S., Lanzafame A. C., 2008, A&A, 479, 557 [Fukue et al.2015]Fukue2015 Fukue K. et al., 2015, ApJ, 812, 64 [Gehren1981]Gehren1981 Gehren T., 1981, A&A 100, 97 [Genovali et al.2013]Genovali2013 Genovali K., et al, 2013, 554, 132 [Genovali et al.2014]Genovali2014 Genovali K., et al, 2014, 566, 37 [Genovali et al.2015]Genovali2015 Genovali K., et al, 2015, 580, 17 [Gray1989]Gray1989 Gray D. F., 1989,ApJ 347, 1021 [Gray & Johanson1991]Gray1991 Gray D. F., Johanson H. L., 1991, PASP 103 439 [Gray et al.1992]Gray1992 Gray, D. F., Baliunas, S. L., Lockwood G. W., Skiff B. A., 1992, ApJ 400, 681 [Gray1994]Gray1994 Gray D. F., 1994, PASP 106, 1248 [Gray & Brown2001]Gray2001 Gray D. F., Brown K., 1994, PASP 113, 723 [Gray2005]Gray2005 Gray D. F., The Observation and Analysis of Stellar Photospheres, 2005, 3rd edn., Cambridge Univ. Press, Cambridge, UK [Hanke et al.2018]Hanke2018 Hanke M., Hansen C. J., Koch A., Grebel E. K., 2018, A&A, 619, A134 [Inno et al.2019]Inno2019 Inno L. et al., 2019, MNRAS 482, 83 [Jian et al.2019]Jian2019 Jian M., Matsunaga N., Fukue K., 2019, MNRAS, 485, 1310 [Jian et al.2020]Jian2020 Jian M. et al., 2020, MNRAS 494, 1724 [Kobayashi et al.2000]Kobayashi2000 Kobayashi N. et al., 2000, in Iye M., Moorwood A. F.eds, Proc. SPIE Conf. Ser. Vol. 4008, Optical and IR Telescope Instrumentation and Detectors. SPIE, Bellingham. p. 1056 [Kovtyukh2007]Kovtyukh2007 Kovtyukh V. 
V., 2007, MNRAS, 378, 617 [Kovtyukh & Andrievsky1999]Kovtyukh1999 Kovtyukh V. V. , Andrievsky S. M., 1999, A&A, 351, 597 [Kovtyukh et al.2019]Kovtyukh19 Kovtyukh V. et al., 2019, MNRAS, 488, 3211 [Kovtyukh & Gorlova2000]Kovtyukh2000 Kovtyukh V. V., Gorlova N. I., 2000, A&A, 358, 587 [Kovtyukh et al.2003]Kovtyukh2003 Kovtyukh V. V., Soubiran C., Belik S. I., Gorlova N. I., 2003, A&A, 411, 559 [Kovtyukh et al.2005a]Kovtyukh2005a Kovtyukh V. V., Andrievsky S. M., Belik S. I., Luck R. E., AJ, 129, 433 [Kovtyukh, Wallerstein, & Andrievsky2005b]Kovtyukh2005b Kovtyukh V. V., Wallerstein G., Andrievsky S. M., 2005, PASP, 117, 1173 [Kovtyukh, Wallerstein, & Andrievsky2005c]Kovtyukh2005c Kovtyukh V. V., Wallerstein G., Andrievsky S. M., 2005, PASP, 117, 1182 [Kovtyukh et al.2006]Kovtyukh2006 Kovtyukh V. V., Soubiran C., Bienaymé O., Mishenina T. V., Belik S. I., 2006, MNRAS, 371, 879 [Kovtyukh et al.2016]Kovtyukh2016 Kovtyukh V. et al., 2016, MNRAS, 460, 2077 [Kovtyukh et al.2018a]Kovtyukh2018a Kovtyukh V. et al., 2018a, PASP, 130, 54201 [Kovtyukh et al.2018b]Kovtyukh2018b Kovtyukh V., Yegorova I., Andrievsky S., Korotin S., Saviane I., Lemasle B., Chekhonadskikh F., Belik S., 2018b, MNRAS, 477, 2276 [Kovtyukh et al.2019]Kovtyukh2019 Kovtyukh V. et al., 2019, MNRAS, 488, 3211 [Kovtyukh et al.2022]Kovtyukh2022 Kovtyukh V. V., Korotin S. A., Andrievsky S. M., Matsunaga N., Fukue K., 2022, MNRAS, 516, 4269 [Lemasle et al.2007]Lemasle2007 Lemasle B., François P., Bono G., Mottini M., Primas F., Romaniello M., 2007, A&A, 467, 283 [Lemasle et al.2008]Lemasle2008 Lemasle B., François P., Piersimoni A., Pedicelli S., Bono G., Laney C. D., Primas F., Romaniello M., 2008, A&A , 490, 613 [Lemasle et al.2013]Lemasle2013 Lemasle B. et al., 2013, A&A, 558, A31 [Lemasle et al.2015]Lemasle2015 Lemasle B. et al., 2015, A&A , 579, A47 [Lemasle et al.2017]Lemasle2017 Lemasle B. et al., 2017, A&A, 608, A85 [Lemasle et al.2018]Lemasle2018 Lemasle B. et al., 2018, A&A, 618, A160 [Lemasle et al.2020]Lemasle2020 Lemasle B. , Hanke M., Storm J., Bono G., Grebel E. K., 2020, A&A, 641, A71 [Luck2018]Luck2018b Luck R. E., 2018, AJ, 156, 171 [Luck & Lambert2011]Luck2011b Luck R. E., Lambert D. L., 2011, AJ, 142, 136 [Luck & Andrievsky2004]Luck2004 Luck R. E., Andrievsky S. M., 2004, AJ, 128, 343 [Luck et al.2003]Luck2003 Luck R. E. , Gieren W. P., Andrievsky S. M., Kovtyukh V. V., Fouqué P., Pont F., Kienzle F., 2003, A&A, 401, 939 [Luck et al.2006]Luck2006 Luck R. E. , Kovtyukh V. V., Andrievsky S. M., 2006, AJ, 132, 902 [Luck et al.2008]Luck2008 Luck R. E., Andrievsky S. M., Fokin A., Kovtyukh V. V., 2008, AJ, 136, 98 [Luck et al.2011]Luck2011a Luck R. E., Andrievsky S. M., Kovtyukh V. V., Gieren W., Graczyk D., 2011, AJ, 142, 51 [Martin et al.2015]Martin2015 Martin R. P., Andrievsky S. M., Kovtyukh V . V ., Korotin S. A., Yegorova I. A., Saviane I., 2015, MNRAS, 449, 4071 [Matsunaga et al.2011]Matsunaga2011 Matsunaga N., Kawadu T., Nishiyama Sh., Nagayama T., Kobayashi N., Tamura M., Bono G., Feast M. W., Nagata T., 2011, Nature, 477, Iss. 7363, 188 [Matsunaga et al.2013]Matsunaga2013 Matsunaga N., Feast M. W., Kawadu T., Nishiyama Sh., Nagayama T., Nagata T., Tamura M., Bono G., Kobayashi N., 2013, MNRAS 429, 385 [Matsunaga et al.2015]Matsunaga2015 Matsunaga N., Fukue K., Yamamoto R., Kobayashi N., Inno L., Genovali K., Bono G., et al., 2015, ApJ, 799, 46 [Matsunaga et al.2016]Matsunaga2016 Matsunaga N., Feast M. W., Bono G., Kobayashi N., Inno L., Nagayama T., Nishiyama S., et al., 2016, MNRAS, 462, 414. 
[Matsunaga2017]Matsunaga2017 Matsunaga N., 2017, in European Physical Journal Web of Conferences, 152, p.01007 preprint ( arXiv:1705.02547 ) [Matsunaga et al.2021]Matsunaga2021 Matsunaga N., Jian M., Taniguchi D., Elgueta S S., 2021, MNRAS 506, 1031 [Nardetto et al.2006]Nardetto2006 Nardetto N., Mourard D., Kervella P., Mathias P., Mérand A., Bersier D., 2006, A&A, 453, 309 [Nardetto et al.2007]Nardetto2007 Nardetto N., Mourard D., Mathias P., Fokin A., Gillet D., 2007, A&A, 471, 661 [Nardetto et al.2008a]Nardetto2008a Nardetto N., Stoekl A., Bersier D., Barnes T. G., 2008a, A&A, 489, 1255 [Nardetto et al.2008b]Nardetto2008b Nardetto N., Groh J. H., Kraus S., Millour F., Gillet D., 2008b, A&A, 489, 1263 [Nardetto et al.2018]Nardetto2018 Nardetto N. et al., 2018, A&A, 616, A92 [Nardetto et al.2023]Nardetto2023 Nardetto N. et al., 2023, A&A , 671, A14 [Ness et al.2015]Ness2015 Ness M., Hogg D. W., Rix H. W., Ho A. Y. Q., Zasowski G., 2015, ApJ, 808, 16 [Origlia et al.2014]Origlia2014 Origlia L. et al., 2014, in Ramsay S. K., McLean I. S., Takami H. eds, Proc. SPIE Conf. Ser. Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V. SPIE, Bellingham. p. 91471E [Pedicelli et al.2010]Pedicelli2010 Pedicelli S. et al., 2010, A&A, 518, A11 [Proxauf et al.2018]Proxauf2018 Proxauf B. et al., 2018, A&A, 616, A82 [Recio-Blanco et al.2006]Recio-Blanco2006 Recio-Blanco A., Bijaoui A., Laverny P., 2006, MNRAS, 370, 141 [Romaniello et al.2008]Romaniello2008 Romaniello M. et al., 2008, A&A, 488, 731 [Romaniello et al.2022]Romaniello2022 Romaniello M. et al., 2022, A&A, 658, A29 [Strassmeier & Schordan2000]Strassmeier2000 Strassmeier K. G., Schordan P., 2000, Astron. Nachr., 321, 277 [Taniguchi et al.2018]Taniguchi2018 Taniguchi D. et al., 2018, MNRAS, 473, 4993 [Taniguchi et al.2021]Taniguchi2021 Taniguchi D. et al., 2021, MNRAS, 502, 4210 [Toner & Gray1988]Toner1988 Toner C. G., Gray D. F., 1988, ApJ 334, 1008 [van Hoof & Struve1953]VanHoof1953 van Hoof A. Struve O., 1953, PASP 65, 158 [Vasilyev et al.2017]Vasilyev2017 Vasilyev V. Ludwig H.-G. Freytag B. Lemasle B. Marconi M., 2017, A&A, 606, 140 [Vasilyev et al.2018]Vasilyev2018 Vasilyev V., Ludwig H. -G., Freytag B., Lemasle B., Marconi M., A&A 611, 19 [Vasilyev et al.2019]Vasilyev2019 Vasilyev V., Amarsi A. M., Ludwig, H. -G., Lemasle B., A&A 624, 85 § CALIBRATIONS lcccccrrrrrcr Individual determinations for calibrating Cepheids. For each individual spectrum, the table provides the observing log, including the pulsation phase of the Cepheid, the temperature and its associated uncertainty, the number of calibrations used for both the calibrating (optical) HARPS-N spectra and the near-infrared GIANO spectra. 4cGIANO 4cHARPS-N Cep UT Date UT JD phase σ N σ/√(N) σ N σ/√(N) (K) (K) (K) (K) (K) (K) continued. 
4cGIANO 4cHARPN Cep UT Date UT JD phase σ N σ/√(N) σ N σ/√(N) (K) (K) (K) (K) (K) (K) δ Cep 2019-07-14 03-06-55 58678.62980 .087 6219 103 16 25.7 6361 54 67 6.6 δ Cep 2019-07-15 04-30-38 58679.68793 .284 5847 76 35 12.9 5869 65 79 7.3 δ Cep 2019-07-16 04-39-15 58680.69392 .471 5644 88 49 12.6 5618 71 81 7.9 δ Cep 2019-07-17 02-17-24 58681.59541 .639 5537 95 68 11.5 5482 46 76 5.2 δ Cep 2019-07-18 02-38-33 58682.61010 .828 5612 102 43 15.6 5738 73 79 8.2 δ Cep 2019-07-19 04-30-31 58683.68785 .029 6295 87 12 25.0 6486 60 65 7.4 δ Cep 2019-08-08 03-57-43 58703.66508 .752 5513 84 66 10.4 5520 46 69 5.5 δ Cep 2019-08-18 04-54-54 58713.70479 .623 5436 127 59 16.6 5494 61 77 7.0 δ Cep 2019-08-21 02-23-57 58716.59996 .162 6098 137 26 26.8 6145 68 74 7.9 δ Cep 2019-08-23 00-41-51 58718.52906 .522 5548 73 59 9.4 5570 59 77 6.8 δ Cep 2019-08-25 00-31-23 58720.52179 .893 6000 212 17 51.5 6115 91 74 10.6 δ Cep 2019-08-26 05-07-52 58721.71379 .115 6197 120 24 24.5 6273 63 68 7.6 δ Cep 2019-09-08 00-07-51 58734.50545 .499 5559 81 60 10.4 5593 70 77 7.9 δ Cep 2019-09-10 02-48-54 58736.61729 .893 ⋯ ⋯ ⋯ ⋯ 6139 111 74 12.9 δ Cep 2019-09-10 21-51-31 58737.41077 .041 6423 129 15 33.2 6478 δ Cep 2019-09-13 00-10-54 58739.50756 .431 5642 91 51 12.7 5639 82 80 9.2 δ Cep 2019-09-21 00-46-38 58747.53238 .927 6120 97 18 22.9 6339 63 67 7.7 S Sge 2019-04-27 05:26:53 58600.72700 .519 5474 68 79 7.7 5460 107 71 12.7 S Sge 2019-04-28 05:20:20 58601.72245 .638 5314 74 82 8.1 5247 143 57 18.9 S Sge 2019-05-03 05:23:26 58606.72460 .235 5942 113 33 19.7 5986 105 76 12.1 S Sge 2019-05-04 05:13:11 58607.71748 .353 5704 81 45 12.1 5709 135 80 15.1 S Sge 2019-05-05 05:08:59 58608.71457 .472 5529 74 73 8.7 5534 135 73 15.8 S Sge 2019-05-06 05:12:32 58609.71703 .592 5355 72 84 7.9 5371 142 69 17.1 S Sge 2019-05-08 05:34:49 58611.73251 .832 5779 99 15 25.4 5825 122 44 18.5 S Sge 2019-05-09 05:30:37 58612.72959 .951 6221 173 26 33.9 6186 125 70 15.0 S Sge 2019-05-10 05:07:57 58613.71385 .069 6130 127 33 22.0 6098 138 68 16.7 S Sge 2019-05-17 03:44:34 58620.65594 .897 5898 185 28 35.0 5901 151 68 18.3 S Sge 2019-05-18 03:14:19 58621.63494 .014 6204 142 28 26.8 6250 149 56 19.9 S Sge 2019-05-19 03:05:31 58622.62883 .132 6013 106 35 17.9 6011 117 66 14.4 S Sge 2019-05-20 03:39:47 58623.65262 .255 6018 152 33 26.5 6013 134 78 15.2 S Sge 2019-05-21 03:36:57 58624.65065 .374 5666 76 46 11.3 5672 110 69 13.2 S Sge 2019-05-22 03:35:59 58625.64998 .493 5492 53 75 6.1 5451 130 77 14.8 S Sge 2019-05-23 03:09:11 58626.63137 .610 5290 66 77 7.5 5356 143 65 17.8 S Sge 2019-05-24 03:45:09 58627.65635 .732 5354 67 80 7.5 5385 143 62 18.2 S Sge 2019-06-07 02:57:32 58641.62328 .398 5578 113 50 16.0 5611 105 81 11.6 S Sge 2019-06-09 02:38:55 58643.61035 .636 5322 60 75 6.9 5315 141 69 17.0 S Sge 2019-06-11 02:09:16 58645.58976 .872 5823 105 31 18.9 5905 136 63 17.1 S Sge 2019-06-13 01:13:25 58647.55098 .106 5964 129 36 21.5 6060 147 70 17.6 S Sge 2019-06-20 00:30:13 58654.52098 .937 6026 200 31 35.9 6055 153 66 18.9 S Sge 2019-08-08 03:37:25 58703.65098 .798 5492 170 59 22.2 5480 183 47 26.6 S Sge 2019-08-18 22:30:19 58714.43771 .085 6001 111 30 20.3 6067 123 77 14.0 S Sge 2019-09-07 22:21:46 58734.43178 .471 5496 66 74 7.6 5472 78 73 9.2 S Sge 2019-09-18 23:51:33 58745.49413 .790 5555 129 41 20.1 5626 154 58 20.2 T Vul 2018-08-09 23-17-25 58340.47042 .534 5573 167 39 26.7 5611 146 52 20.2 T Vul 2018-08-15 20-53-51 58346.37072 .864 6191 105 9 35.0 6299 157 37 25.8 T Vul 2018-08-17 23-15-26 58348.46905 .337 5763 133 32 23.5 5753 124 66 15.3 T Vul 
2018-08-24 21-01-38 58355.37613 .894 6325 267 11 80.6 6436 125 41 19.4 T Vul 2018-08-25 00-46-59 58355.53262 .929 6330 182 13 50.5 6395 146 53 20.1 T Vul 2018-08-25 20-49-05 58356.36741 .118 6065 147 22 31.3 6144 137 70 16.3 T Vul 2018-08-26 02-18-48 58356.59638 .169 5997 143 32 25.2 6043 137 54 18.7 T Vul 2018-08-29 01-41-14 58359.57030 .840 6049 211 20 47.2 6107 143 53 19.7 T Vul 2018-09-07 01-42-29 58368.57116 .869 6083 161 22 34.3 6267 155 58 20.3 T Vul 2018-09-10 20-25-15 58372.35086 .721 5638 216 22 46.0 5625 124 55 16.7 T Vul 2018-09-12 00-55-50 58373.53877 .989 6306 146 13 40.4 6495 177 48 25.5 T Vul 2018-09-12 02-57-19 58373.62313 .008 6096 198 5 88.6 6297 138 55 18.6 T Vul 2018-10-01 00-07-06 58392.50493 .265 5898 126 10 39.9 5850 113 79 12.8 T Vul 2018-12-10 19-15-42 58463.30256 .227 ⋯ ⋯ ⋯ ⋯ 5852 149 68 18.0 T Vul 2018-12-20 19-22-38 58473.30738 .482 5599 229 19 52.6 5615 160 57 21.2 T Vul 2019-08-08 03-27-40 58703.64421 .413 5681 178 52 24.7 5609 140 60 18.1 T Vul 2019-08-16 20-41-57 58712.36246 .379 5712 139 26 27.3 5734 142 78 16.0 T Vul 2019-09-08 01-13-05 58734.55075 .381 5634 123 37 20.1 5700 134 61 17.1 T Vul 2019-09-10 20-07-42 58737.33868 .010 6362 235 10 74.4 6415 151 63 19.0 X Cyg 2019-04-27 05-10-08 58600.71537 .360 5108 40 81 4.4 5094 114 60 14.7 X Cyg 2019-04-28 05-32-30 58601.73090 .422 5047 56 78 6.3 5016 90 70 10.8 X Cyg 2019-05-01 05-37-25 58604.73431 .605 4931 63 18 14.8 4865 90 52 12.5 X Cyg 2019-05-03 05-13-56 58606.71800 .726 5191 143 69 17.2 5183 112 41 17.6 X Cyg 2019-05-04 04-58-11 58607.70707 .787 5416 97 79 10.9 5366 78 59 10.2 X Cyg 2019-05-05 04-54-06 58608.70423 .848 5418 68 81 7.6 5376 77 67 9.4 X Cyg 2019-05-06 04-57-28 58609.70657 .909 5973 111 31 20.0 6008 141 80 15.8 X Cyg 2019-05-10 05-18-49 58613.72140 .154 5400 71 83 7.8 5432 124 74 14.5 X Cyg 2019-05-17 03-30-41 58620.64630 .576 ⋯ ⋯ ⋯ ⋯ 4851 76 57 10.0 X Cyg 2019-05-18 03-56-20 58621.66412 .638 ⋯ ⋯ ⋯ ⋯ 4913 97 55 13.1 X Cyg 2019-05-19 03-15-30 58622.63576 .698 5126 86 64 10.7 5068 130 57 17.2 X Cyg 2019-05-20 03-29-31 58623.64549 .759 5333 94 73 11.0 5339 104 53 14.3 X Cyg 2019-05-21 03-26-24 58624.64333 .820 5373 89 79 10.0 5346 72 60 9.3 X Cyg 2019-05-22 03-20-50 58625.63946 .881 5638 99 50 14.0 5667 73 80 8.2 X Cyg 2019-05-23 02-59-07 58626.62438 .941 6058 127 24 25.8 6109 137 72 16.1 X Cyg 2019-05-24 03-24-59 58627.64234 .003 5944 132 34 22.6 5963 117 79 13.1 X Cyg 2019-06-06 04-54-08 58640.70425 .800 5368 91 80 10.2 5347 78 61 9.9 X Cyg 2019-08-08 03-46-45 58703.65746 .642 ⋯ ⋯ ⋯ ⋯ 4924 111 55 15.0 X Cyg 2019-08-22 02-45-33 58717.61496 .494 4944 46 39 7.4 4935 84 67 10.3 X Cyg 2019-09-07 23-41-56 58734.48745 .524 4933 67 12 19.4 4914 97 65 12.0 X Cyg 2019-09-19 23-41-13 58746.48695 .256 5265 51 73 5.9 5220 72 72 8.5 SV Vul 2019-04-27 04:52:25 58600.70306 .139 5967 150 35 25.4 5843 139 73 16.2 SV Vul 2019-04-28 05:10:03 58601.71531 .161 5862 134 36 22.4 5758 137 81 15.2 SV Vul 2019-05-01 05:09:53 58604.71519 .228 5599 137 70 16.4 5513 141 77 16.1 SV Vul 2019-05-04 05:22:10 58607.72372 .295 5431 91 81 10.2 5351 109 72 12.9 SV Vul 2019-05-17 03:53:28 58620.66212 .583 5049 71 73 8.4 5080 84 61 10.7 SV Vul 2019-05-21 03:47:34 58624.65803 .672 5003 89 69 10.7 4987 97 69 11.7 SV Vul 2019-06-07 03:06:45 58641.62968 .051 6133 136 26 26.7 6167 131 67 16.0 SV Vul 2019-06-11 00:58:23 58645.54054 .138 5937 109 36 18.1 5834 125 71 14.8 SV Vul 2019-06-20 01:29:58 58654.56247 .339 5327 113 81 12.5 5290 119 74 13.9 SV Vul 2019-07-15 03:16:09 58679.63621 .897 5323 93 78 10.5 5273 96 55 12.9 SV Vul 
2019-07-18 03:27:34 58682.64414 .964 6121 145 25 29.0 6166 88 52 12.2 SV Vul 2019-08-08 03:17:46 58703.63733 .432 5204 105 78 11.9 5185 113 71 13.4 SV Vul 2019-08-09 21:23:23 58705.39123 .471 5153 90 80 10.1 5149 118 69 14.2 SV Vul 2019-08-18 22:21:18 58714.43145 .672 4979 105 46 15.4 4946 109 57 14.5 SV Vul 2019-08-21 01:27:25 58716.56070 .720 4968 102 46 15.0 4932 146 40 23.0 SV Vul 2019-08-22 20:51:45 58718.36927 .760 4937 107 65 13.3 4957 149 31 26.8 SV Vul 2019-08-25 02:36:38 58720.60877 .810 ⋯ ⋯ ⋯ ⋯ 4983 140 39 22.5 SV Vul 2019-08-27 20:46:13 58723.36542 .871 5137 82 62 10.4 5183 167 48 24.1 SV Vul 2019-08-30 20:20:17 58726.34741 .938 5698 137 14 36.6 5766 138 52 19.2 SV Vul 2019-09-03 20:30:20 58730.35439 .027 6257 158 24 32.3 6253 117 62 14.8 SV Vul 2019-09-07 20:35:19 58734.35785 .116 6034 135 26 26.4 6026 160 42 24.7 SV Vul 2019-09-19 23:32:17 58746.48075 .386 5267 89 81 9.9 5194 137 70 16.4 S Vul 2019-04-27 04:11:07 58600.67438 .792 5282 89 72 10.5 5223 119 60 15.4 S Vul 2019-05-06 05:22:17 58609.72380 .925 5856 154 35 26.1 5773 112 47 16.3 S Vul 2019-05-10 04:43:10 58613.79664 .985 5990 163 32 28.7 5971 121 56 16.2 S Vul 2019-05-18 03:29:57 58621.64579 .101 6069 117 33 20.4 5891 132 72 15.5 S Vul 2019-05-24 03:55:57 58627.66385 .189 5900 142 35 24.0 5748 134 80 15.0 S Vul 2019-06-06 05:05:16 58640.71199 .381 5478 136 74 15.9 5438 139 75 16.1 S Vul 2019-07-14 02:58:45 58678.62413 .938 5822 157 32 27.8 5779 152 42 23.5 S Vul 2019-07-19 04:19:23 58683.68012 .012 5965 145 33 25.2 5980 155 54 21.1 S Vul 2019-09-03 21:22:41 58730.39075 .699 5201 79 79 8.9 5160 123 70 14.7 S Vul 2019-09-18 23:28:46 58745.47831 .920 5548 139 43 21.1 5566 135 53 18.5 rclclllrrrrrr LDR- calibration relations. For each relation, the wavelengths of both lines and the corresponding chemical element are provided, together with the analytic function type, the value of the coefficients, the average accuracy of the calibration and the temperature range wherein it can be used. When one of the lines is blended, representing two or more elements, only the elements which predominantly contribute to the blend are indicated. N Lambda1 El 1 Lambda2 El 2 Name = sigma a b c d (Å) (Å) (K) (K) continued. N Lambda1 El 1 Lambda2 El 2 Name = sigma a b c d (Å) (Å) (K) (K) 1 14968.327 1 15024.992 1 Quadratic Fit a+br+cr^2 81 4800–5550 5664.6 –115.54 –4632.1 ⋯ 2 14968.327 1 15317.843 1 Modified Exponential a e^b/r 70 4950–5550 5831.4 –0.17068 ⋯ ⋯ 3 15017.700 1 15317.843 1 Modified Hoerl Model ab^1/rr^c 66 4900–5550 6022.1 0.84430 –0.032450 ⋯ 4 15024.992 1 15221.551 1 Logarithm Fit a+bln(r) 56 4950–5700 4662.1 320.08 ⋯ ⋯ 5 15024.992 1 15328.367 1 Modified Hoerl Model ab^1/rr^c 71 5000–5650 5079.3 0.84156 0.044211 ⋯ 6 15040.246 1 + 15317.843 1 Hoerl Model a(b^r)(r^c) 62 4800–5650 4431.1 0.99541 0.11299 ⋯ 7 15047.705 1 15317.843 1 Modified Hoerl Model ab^1/rr^c 54 4800–5650 5263.3 0.78937 0.030078 ⋯ 8 15047.705 1 15387.803 1 Linear Fit a+br 124 5000–6250 4367.8 354.19 ⋯ ⋯ 9 15051.749 1 15317.843 1 Modified Exponential ae^b/r 61 4850–5600 5769.8 –0.32729 ⋯ ⋯ 10 15063.513 1 15403.791 1 Shifted Power Fit a(r–b)^c 130 4900–5900 4738.9 –0.050804 –0.10471 ⋯ 11 15077.287 1 15422.276 1 Quadratic Fit a+br+cr^2 81 5100–6350 6713.5 –2587.1 1006.5 ⋯ 12 15094.695 1 15317.843 1 Logarithm Fit a+bln(r) 74 4800–6000 4843.3 537.54 ⋯ ⋯ 13 15178.422 1+1 15210.356 1 ? 
Linear Fit a+br 90 5000–5950 4816.9 208.72 ⋯ ⋯ 14 15178.422 1+1 15403.791 1 Logarithm Fit a+bln(r) 144 4950–6350 5100.0 –775.93 ⋯ ⋯ 15 15178.422 1+1 15422.276 1 Modified Hoerl Model ab^1/rr^c 140 5000–6500 4992.1 0.99911 –0.15195 ⋯ 16 15178.422 1+1 15469.816 1 Logarithm Fit a+bln(r) 126 5100–6300 5162.0 –826.02 ⋯ ⋯ 17 15178.422 1+1 15478.482 1+1 Quadratic Fit a+br+cr^2 122 5100–6300 7008.1 –3996.6 1901.3 ⋯ 18 15207.526 1 15317.843 1 Modified Exponential ae^b/r 46 4850–5550 5730.2 –0.27878 ⋯ ⋯ 19 15207.526 1 15328.367 1 Modified Exponential ae^b/r 105 4950–5700 5980.2 –0.34844 ⋯ ⋯ 20 15207.526 1 15490.882 1 ? Quadratic Fit a+br+cr^2 100 4800–5750 3954.0 639.38 –50.183 ⋯ 21 15210.356 1 ? 15217.777 1 Quadratic Fit a+br+cr^2 81 4950–5650 6091.5 –3312.0 2270.0 ⋯ 22 15210.356 1 ? 15224.729 1 Quadratic Fit a+br+cr^2 75 4950–5600 6041.5 –1678.75 569.95 ⋯ 23 15210.356 1 ? 15376.831 1 Quadratic Fit a+br+cr^2 76 4950–5600 5827.4 –2602.4 1730.0 ⋯ 24 15210.356 1 ? 15400.077 1 Quadratic Fit a+br+cr^2 82 4950–5600 5884.8 –2216.1 1247.7 ⋯ 25 15210.356 1 ? 15531.752 1 Quadratic Fit a+br+cr^2 89 4950–5600 5932.8 –1868.3 695.06 ⋯ 26 15210.356 1 ? 15686.441 1 Quadratic Fit a+br+cr^2 92 5000–5600 5882.1 –2908.7 2148.0 ⋯ 27 15219.618 1 15317.843 1 Modified Exponential ae^(b/r) 58 4900–5450 5721.2 –0.20768 ⋯ ⋯ 28 15221.551 1 15376.831 1 Quadratic Fit a+br+cr^2 74 4950–5650 5888.5 –2320.8 1317.5 ⋯ 29 15221.551 1 15400.077 1 Quadratic Fit a+br+cr^2 94 4950–5750 5953.6 –1965.9 893.11 ⋯ 30 15224.729 1 15317.843 1 Modified Exponential ae^b/r 75 4950–5600 5836.2 –0.12437 ⋯ ⋯ 31 15239.712 1 15317.843 1 Modified Exponential ae^b/r 96 5000–5600 5811.4 –0.11757 ⋯ ⋯ 32 15239.712 1 15403.791 1 Hoerl Model a(b^r)(r^c) 149 5050–6900 4446.5 1.1119 –0.20245 ⋯ 33 15239.712 1 15422.276 1 Modified Hoerl Model ab^1/rr^c 157 5050–6900 4792.2 1.0139 –0.10608 ⋯ 34 15239.712 1 15469.816 1 Modified Hoerl Model ab^1/rr^c 180 5000–6900 5003.7 0.99900 –0.15005 ⋯ 35 15244.974 1 15317.843 1 Modified Exponential ae^b/r 60 4900–5550 5766.7 –0.23036 ⋯ ⋯ 36 15284.242 1 15422.276 1 Modified Hoerl Model ab^1/rr^c 132 5000–6200 4975.6 0.99943 –0.10023 ⋯ 37 15284.242 1 15469.816 1 Modified Hoerl Model ab^1/rr^c 140 4850–6250 5105.0 0.99942 –0.10170 ⋯ 38 15284.242 1 15478.482 1 +1 Logarithm Fit a+bln(r) 130 4900–6200 4897.0 –585.43 ⋯ ⋯ 39 15317.843 1 15335.383 1 Quadratic Fit a+br+cr^2 62 4800–5700 5877.0 –2324.2 951.69 ⋯ 40 15317.843 1 15376.831 1 Quadratic Fit a+br+cr^2 59 4950–5650 5806.8 –1748.5 677.34 ⋯ 41 15317.843 1 15400.077 1 Quadratic Fit a+br+cr^2 61 4950–5650 5820.4 –1236.2 188.60 ⋯ 42 15317.843 1 15403.791 1 Shifted Power Fit a(r–b)^c 63 4950–5700 5077.1 –0.20476 –0.097246 ⋯ 43 15328.367 1 15376.831 1 Quadratic Fit a+br+cr^2 92 4950–5750 5999.2 –2046.8 743.43 ⋯ 44 15328.367 1 15422.276 1 Quadratic Fit a+br+cr^2 79 4950–5650 5868.9 –1414.2 487.91 ⋯ 45 15328.367 1 15469.816 1 Shifted Power Fit a(r–b)^c 75 4800–5800 5188.3 –0.20991 –0.10800 ⋯ 46 15328.367 1 15478.482 1 +1 Modified Hoerl Model ab^1/rr^c 122 5000–5900 4950.2 0.99952 –0.081113 ⋯ 47 15363.530 1 ? 15469.816 1 Hoerl Model a(b^r)(r^c) 61 4900–5650 5305.5 0.94907 –0.033075 ⋯ 48 15373.395 1 ? 15422.276 1 Power Fit ar^b 59 4950–5450 4762.4 –0.076892 ⋯ ⋯ 49 15373.395 1 ? 15469.816 1 Quadratic Fit a+br+cr^2 67 4950–5500 5753.4 –1630.6 804.19 ⋯ 50 15373.395 1 ? 15591.490 1 Linear Fit a+br 75 4900–5400 5604.3 –1695.0 ⋯ ⋯ 51 15373.395 1 ? 15604.221 1 Linear Fit a+br 58 4900–5450 5712.6 –1637.6 ⋯ ⋯ 52 15373.395 1 ? 
15748.988 Mg 1 Linear Fit a+br 66 4900–5400 5653.2 –2445.6 ⋯ ⋯ 53 15381.960 1 15422.276 1 Quadratic Fit a+br+cr^2 135 5100–6600 6901.6 –4616.3 2415.2 ⋯ 54 15387.803 1 15403.791 1 3rd degree Polynom. Fit a+br+cr^2+dr^3 111 4900–6350 6954.8 –3223.0 1889.5 –404.99 55 15400.077 1 15490.882 1 ? Quadratic Fit a+br+cr^2 145 4700–6900 4316.8 654.34 –42.104 ⋯ 56 15403.791 1 15490.882 1 ? Quadratic Fit a+br+cr^2 122 4700–6900 4602.7 434.24 –22.149 ⋯ 57 15403.791 1 15665.241 1 Modified Hoerl Model ab^1/rr^c 127 4700–6900 5087.6 1.00095 0.12909 ⋯ 58 15422.276 1 15485.454 1 Quadratic Fit a+br+cr^2 143 4800–6300 4534.9 243.68 –6.8790 ⋯ 59 15422.276 1 15514.279 1 Shifted Power Fit a(r–b)^c 114 4900–6200 4555.3 –0.86381 0.14224 ⋯ 60 15422.276 1 15674.653 1 Linear Fit a+br 170 5000–6900 4801.6 147.88 ⋯ ⋯ 61 15459.343 1+1 ? 15469.816 1 Hoerl Model a(b^r)(r^c) 157 5000–6600 4699.9 1.0884 –0.22551 ⋯ 62 15459.343 1+1 ? 15478.482 1+1 Hoerl Model a(b^r)(r^c) 137 4900–6500 4695.0 1.0074 –0.20295 ⋯ 63 15462.287 1 ? 15519.361 1 Linear Fit a+br 74 4900–5700 5748.9 –824.48 ⋯ ⋯ 64 15469.816 1 15485.454 1 Power Fit ar^b 81 4900–5600 4787.8 0.11016 ⋯ ⋯ 65 15469.816 1 15490.882 1 ? Shifted Power Fit a(r–b)^c 128 4700–6900 2834.5 –3.8666 0.37210 ⋯ 66 15469.816 1 15514.279 1 Hoerl Model a(b^r)(r^c) 95 4900–6200 5067.7 1.0068 0.083165 ⋯ 67 15469.816 1 15611.045 1 Quadratic Fit a+br+cr^2 125 4800–6500 4312.3 771.88 –60.147 ⋯ 68 15478.482 1 15490.882 1 ? Quadratic Fit a+br+cr^2 121 4700–6900 4581.2 379.54 –15.872 ⋯ 69 15490.882 1 15604.221 1 Quadratic Fit a+br+cr^2 100 4900–5800 6858.0 –4248.8 2260.6 ⋯ 70 15490.882 1 15621.654 1 Quadratic Fit a+br+cr^2 107 4900–5750 6878.4 –5478.4 3604.0 ⋯ 71 15514.279 1 15591.490 1 Hoerl Model a(b^r)(r^c) 98 4900–6150 4394.5 1.0669 –0.16310 ⋯ 72 15514.279 1 15604.221 1 Hoerl Model a(b^r)(r^c) 116 5000–6100 4371.5 1.1056 –0.19259 ⋯ 73 15514.279 1 15748.988 1 Hoerl Model a(b^r)(r^c) 135 5000–6300 3963.4 1.2149 –0.17771 ⋯ 74 15519.096 1 15665.241 1 Logarithm Fit a+bln(r) 117 4900–5750 5702.7 1195.2 ⋯ ⋯ 75 15652.871 1 15680.069 1 Logarithm Fit a+bln(r) 162 5000–6300 4914.1 1020.1 ⋯ ⋯ 76 15658.545 1 15661.898 1+1 Linear Fit a+br 103 4950–5500 5791.4 –2352.5 ⋯ ⋯ 77 15658.545 1 15906.044 1 Linear Fit a+br 104 4950–5500 5727.0 –1825.1 ⋯ ⋯ 78 15658.545 1 15912.591 1 Linear Fit a+br 108 4950–5500 5807.8 –1596.5 ⋯ ⋯ 79 15658.545 1 15920.637 1 Linear Fit a+br 92 4950–5500 5765.5 –2154.6 ⋯ ⋯ 80 15665.240 1 15677.519 1 Power Fit ar^b 137 4900–6650 4769.5 –0.24874 ⋯ ⋯ 81 15665.240 1 15686.441 1 Quadratic Fit a+br+cr^2 144 4900–6450 7531.9 –5704.6 2994.0 ⋯ 82 15665.240 1 15748.988 1 Shifted Power Fit a(r–b)^c 103 5000–6900 4460.3 0.017860 –0.15095 ⋯ 83 15673.538 1 + 15748.988 1 Modified Hoerl Model ab^1/rr^c 164 4900–6900 5152.3 1.0760 0.22764 ⋯ 84 15748.988 1 15941.848 1 Modified Hoerl Model ab^1/rr^c 137 4900–6900 6192.5 0.53702 0.056421 ⋯ 85 15941.848 1 15980.726 1 Linear Fit a+br 179 4700–6900 7359.4 –4381.17 ⋯ ⋯ 86 16584.447 1 16632.019 1 +1 Linear Fit a+br 124 4900–6100 6793.3 –2441.3 ⋯ ⋯ 87 16645.874 1 16890.414 1 Modified Hoerl Model ab^1/rr^c 125 5000–6500 4791.8 0.99914 –0.15115 ⋯
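The calibrations in the table above can be applied mechanically: each relation maps a measured line-depth ratio r to a temperature through its analytic form, and the temperatures from all usable relations are averaged, with the error of the mean scaling as σ/√(N). The following Python sketch illustrates this procedure; the functional forms follow the table and the two example coefficient sets are taken from relations 2 and 8, but the ratio values and the simple unweighted average are illustrative assumptions rather than the exact pipeline used in this work.

```python
import numpy as np

# Analytic forms used in the calibration table (r = line-depth ratio, T in K)
FORMS = {
    "linear":          lambda r, a, b, c=0, d=0: a + b * r,
    "quadratic":       lambda r, a, b, c, d=0:   a + b * r + c * r**2,
    "cubic":           lambda r, a, b, c, d:     a + b * r + c * r**2 + d * r**3,
    "logarithm":       lambda r, a, b, c=0, d=0: a + b * np.log(r),
    "power":           lambda r, a, b, c=0, d=0: a * r**b,
    "hoerl":           lambda r, a, b, c, d=0:   a * b**r * r**c,
    "mod_hoerl":       lambda r, a, b, c, d=0:   a * b**(1.0 / r) * r**c,
    "mod_exponential": lambda r, a, b, c=0, d=0: a * np.exp(b / r),
    "shifted_power":   lambda r, a, b, c, d=0:   a * (r - b)**c,
}

def combine_calibrations(measurements):
    """measurements: list of (form, coefficients, measured_ratio) tuples.
    Returns the mean temperature and the error of the mean, sigma/sqrt(N)."""
    temps = np.array([FORMS[form](r, *coeffs) for form, coeffs, r in measurements])
    return temps.mean(), temps.std(ddof=1) / np.sqrt(temps.size)

# Illustrative call with the coefficients of relations 2 and 8; the ratio values are invented.
example = [
    ("mod_exponential", (5831.4, -0.17068), 2.90),
    ("linear",          (4367.8, 354.19),   3.20),
]
T, dT = combine_calibrations(example)
print(f"T = {T:.0f} +/- {dT:.0f} K")
```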
http://arxiv.org/abs/2307.04811v1
20230703182656
Non-local interference in arrival time
[ "Ali Ayatollah Rafsanjani", "MohammadJavad Kazemi", "Vahid Hosseinzadeh", "Mehdi Golshani" ]
quant-ph
[ "quant-ph", "physics.app-ph", "physics.atom-ph" ]
[email protected] Department of Physics, Sharif University of Technology, Tehran, Iran [email protected] Department of Physics, Faculty of Science, University of Qom , Qom, Iran School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran Although position and time have different mathematical roles in quantum mechanics, with one being an operator and the other being a parameter, there is a space-time duality in quantum phenomena; Many phenomena observed in the spatial domain are also observed in the temporal domain. In this context, we propose a modified version of the two double-slit experiment using entangled atoms to observe a non-local interference in the arrival time distribution. Our numerical simulations demonstrate a complementary relationship between the one-particle and two-particle interference visibility in the arrival time distribution, which is analogous to the complementary relationship observed in the arrival position distribution. To overcome the complexities of computing the arrival time distribution in quantum mechanics, we employ a Bohmian treatment. Our approach to investigating this experiment can be applied to a wide range of phenomena, and it appears that the predicted non-local temporal interference and associated complementarity relationship are universal behaviors of entangled quantum systems that may manifest in various phenomena. PACS numbers 03.65.Ta, 05.10.Gg Non-local interference in arrival time Mehdi Golshani August 1, 2023 ====================================== § INTRODUCTION In quantum theory, several effects that were initially observed in the spatial domain have subsequently been observed in the time domain. These effects include a wide range of phenomena such as diffraction in time <cit.>, interference in time <cit.>, Anderson localization in time <cit.> and several others <cit.>. To extend this line of research, we propose a simple experimental setup that can be used to observe a non-local interference in arrival time, which is analogous to the non-local interference in arrival position observed in entangled particle systems <cit.>. The proposed experimental setup involves two double-slit arrangement in which a source emits pairs of entangled atoms toward slits <cit.>. Such entangled atoms can be produced, for example, via a four-wave mixing process in colliding Bose-Einstein condensates <cit.>. The atoms fall due to the influence of gravity and then they reach horizontal fast single-particle detectors, which record the arrival time and arrival position of the particles, as depicted in Fig. <ref>. In fact, a similar arrangement had previously been proposed for observing non-local two-particle interference in arrival position distribution <cit.>. The critical difference between our setup and theirs is that the slits in our setup are not placed at the same height. This leads to height-separated wave packets that spread in space during falling and overlap each other. Moreover, we do not consider the horizontal screens at the same height, so the particles may be detected at completely different timescales. Our study indicates that these apparently small differences lead to significant interference in the two-particle arrival time distribution, which did not exist in the previous versions of the experiment. This phenomenon is experimentally observable, thanks to the current single-atom detection technology. 
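To put rough numbers on the statement that the two particles may be detected at completely different timescales, one can use the classical free-fall time from the slits to a horizontal screen. The drop distances in the short sketch below are assumptions chosen only for illustration; they are not specified at this point in the text.

```python
import math

g = 9.81                           # m/s^2
drops = [0.01, 0.10, 1.00]         # assumed slit-to-screen drops in metres

for Y in drops:
    t = math.sqrt(2 * Y / g)       # classical free-fall time to the screen
    print(f"drop {Y:5.2f} m  ->  arrival after about {1e3 * t:6.1f} ms")
```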
Our numerical study shows that the required space-time resolution in particle detection is achievable using current single-atom detectors, such as the recent delay-line detectors described in <cit.> or the detector used in <cit.>. The theoretical analysis of the proposed experiment is more complex than that of the usual double-double-slit experiment due to at least two reasons. Firstly, since the two particles are not observed simultaneously, the wave function of the two particles collapses to a single-particle wave function at the time the first particle is detected in the middle of the experiment. Secondly, the theoretical analysis of arrival time distribution is more complex than that of arrival position distribution. This is because, in the mathematical framework of orthodox quantum mechanics, position is represented by a self-adjoint operator, while time is just treated as a parameter [In fact, Pauli shows that there is no self-adjoint time operator canonically conjugate to the Hamiltonian if the Hamiltonian spectrum is discrete or has a lower bound <cit.>.]. As a result, the Born rule cannot be directly applied to the time measurements. This fact, coupled with other issues such as the quantum Zeno paradox <cit.>, leads to some ambiguities in calculating the arrival time distribution <cit.>. In fact, there is no agreed-upon method for calculating the arrival time distribution, although several different proposals have been put forth based on various interpretations of quantum theory <cit.>. Most of the arrival time distribution proposals are limited to simple cases, such as a free one-particle in one dimension, and are not yet fully extended to more complex situations, such as our double-double-slit setup. Nevertheless, the Bohmian treatment seems suitable for analyzing the proposed setup since it can be unambiguously generalized for multi-particle systems in the presence of external potentials. Thus, in this paper, we investigate the proposed experiment using recent development in the Bohmian arrival time for entangled particle systems, including detector back-effect <cit.>. The results could contribute to a better understanding of the non-local nature of quantum mechanics in the time domain. Moreover, beyond the proposed setup, our theoretical approach has potential applications in related fields such as atomic ghost imaging <cit.>, and in state tomography via time-of-flight measurements <cit.>. The structure of this paper is as follows: In Section II, we introduce the theoretical framework and the numerical results. Then, in Section III, we discuss the physical insights of the results, including signal locality and the complementarity between one-particle and two-particle interference visibility. In Section IV, we study the screen back-effect, and then we summarize our results in Section V. § THEORETICAL FRAMEWORK Bohmian mechanics, also known as pilot wave theory, is a coherent realistic version of quantum theory, which avoids the measurement problem <cit.>. In the Bohmian interpretation, in contrast to the orthodox interpretation, the wave function does not give a complete description of the quantum system. Instead, the actual trajectories of particles are taken into account as well, and this can provide a more intuitive picture of quantum phenomena <cit.>. 
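To make the trajectory picture concrete before setting up the formalism, the following minimal one-dimensional sketch evolves a superposition of two Gaussian packets with a split-step Fourier method and integrates a few Bohmian trajectories with the guidance law v = (ħ/m) Im(∂_x ψ/ψ), which is introduced for the full two-particle problem in the next section. It uses natural units, omits gravity, and employs a plain Euler step; it is a toy illustration, not the code used to produce the figures of this paper.

```python
import numpy as np

hbar, m = 1.0, 1.0                          # natural units for this toy example
x = np.linspace(-60.0, 60.0, 4096)
dx, dt, nsteps = x[1] - x[0], 0.002, 4000
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)

def packet(x0, v):
    """Gaussian packet centred at x0 moving with velocity v (width 1)."""
    return np.exp(-(x - x0) ** 2 / 4.0 + 1j * m * v * x / hbar)

psi = packet(-15.0, +2.0) + packet(+15.0, -2.0)   # two packets heading towards each other
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# a few Bohmian particles with initial positions sampled from |psi|^2
w = np.abs(psi) ** 2
rng = np.random.default_rng(1)
X = rng.choice(x, size=8, p=w / w.sum())

for _ in range(nsteps):
    # exact free propagation over one time step (a potential phase could be added here)
    psi = np.fft.ifft(np.exp(-1j * hbar * k ** 2 * dt / (2.0 * m)) * np.fft.fft(psi))
    # guidance equation: v = (hbar/m) Im(d_x psi / psi), interpolated to the particle positions
    v = hbar / m * np.imag(np.gradient(psi, dx) / psi)
    X = X + np.interp(X, x, v) * dt

print("final Bohmian positions:", np.round(X, 2))
```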
Nonetheless, it has been proved that in the quantum equilibrium condition <cit.>, Bohmian mechanics is experimentally equivalent to orthodox quantum mechanics <cit.> insofar as the latter is unambiguous <cit.>; e.g., in usual position or momentum measurements at a specific time. In recent years, Bohmian mechanics has gained renewed interest for various reasons <cit.>. One of these reasons is the fact that Bohmian trajectories can lead to clear predictions for quantum characteristic times, such as tunneling time duration <cit.> and arrival time distribution <cit.>. Here, we investigate the proposed double-double-slit setup using Bohmian tools. According to Bohmian mechanics, the state of a two-particle system is determined by the wave function Ψ( r_1, r_2) and the particles' actual positions ( R_1, R_2). The time evolution of the wave function is given by the two-particle Schrödinger equation iħ∂/∂ tΨ_t( r_1, r_2)=∑_i=1,2[-ħ^2/2m_i∇^2_iΨ_t+V_i( r_i)Ψ_t], where in the proposed setup V_i( r_i)=-m_i g· r_i and g represents the gravitational field. The particle dynamics is given by a first-order differential equation in configuration space, the "guidance equation", d/dt R_i(t)= v_i^Ψ_t( R_1(t), R_2(t)), where v_i^Ψ_t are the velocity fields associated with the wave function Ψ_t, i.e. v_i^Ψ_t=(ħ/m_i) Im(∇_iΨ_t/Ψ_t) <cit.>. When particle 1, for example, is detected at time t=t_c, the two-particle wave function collapses effectively to a one-particle wave function, i.e. Ψ_t_c( r_1, r_2)→ψ_t_c( r_2), where <cit.>, ψ_t_c( r_2)=Ψ_t_c( R_1(t_c), r_2), which is known as the “conditional wave function” in the Bohmian formalism <cit.>. For t>t_c, the time evolution of the wave function is given by the following one-particle Schrödinger equation iħ∂/∂ tψ_t( r_2)=-ħ^2/2m_2∇^2_2ψ_t( r_2)+V_2( r_2)ψ_t( r_2), and the remaining particle motion is determined by the associated one-particle guidance equation d/dt R_2(t)= v_2^ψ_t( R_2(t)). It is important to note that, in general, a conditional wave function does not obey the Schrödinger equation <cit.>. However, in a measurement situation, the interaction of the detected particle with the environment (including the detection screen) cancels any entanglement between the undetected and detected particles, due to the decoherence process <cit.>. Therefore, in this situation, after the measurement process, the conditional wave function represents the "effective wave function" of the undetected particle <cit.>, which satisfies the one-particle Schrödinger equation <cit.>. We focus our study on the propagation of the wave function from the slits to the detection screens. Thus, we may consider the initial wave function as follows <cit.>: Ψ_t_0( r_1, r_2)=N[(1-η)/2 Ψ_t_0^(×)( r_1, r_2)+(1+η)/2 Ψ_t_0^(||)( r_1, r_2)], in which N is a normalization constant, Ψ_t_0^(×)( r_1, r_2) = [g_u^+( r_1)g_d^-( r_2)+g_d^+( r_1)g_u^-( r_2)]+ 1↔ 2, Ψ_t_0^(||)( r_1, r_2) = [g_u^+( r_1)g_u^-( r_2)+g_d^+( r_1)g_d^-( r_2)] + 1↔ 2, and g_u^±(x,y) = G(x;σ_x,± l_x, ± u_x)G(y;σ_y, +l_y, +u_y), g_d^±(x,y) = G(x;σ_x,± l_x, ± u_x)G(y;σ_y, -l_y, -u_y), where G is a Gaussian wave function G(x;σ, l, u)=N e^-(x-l)^2/(4 σ^2)+ i m u (x-l)/ħ. The Gaussian-type initial wave function is a minimal toy model which is commonly used in the literature (e.g., see <cit.>). The parameter η controls the entanglement of the wave function.
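The slit (y) part of this initial state is what carries the entanglement: the x factors are common to Ψ^(×) and Ψ^(||) and factor out, so the role of η can be illustrated with the y coordinates alone. The sketch below builds this y part on a grid and prints a simple correlation of the two slit coordinates; the helium-4 mass is an assumption made for the example, while σ_y, l_y and u_y are the values quoted later in the Results section.

```python
import numpy as np

hbar = 1.054571817e-34
m = 6.6464731e-27                      # helium-4 mass in kg (assumed)
sigma_y, l_y, u_y = 1e-6, 1e-5, 0.0    # slit width, half-separation, vertical speed

def G(y, sigma, l, u):
    """Initial Gaussian packet of Eq. (10); overall normalisation handled on the grid."""
    return np.exp(-(y - l)**2 / (4 * sigma**2) + 1j * m * u * (y - l) / hbar)

def psi0_y(y1, y2, eta):
    """y-part of Eqs. (6)-(9); the common x factors have been dropped."""
    U = lambda y: G(y, sigma_y, +l_y, +u_y)   # upper slit
    D = lambda y: G(y, sigma_y, -l_y, -u_y)   # lower slit
    cross = U(y1) * D(y2) + D(y1) * U(y2)     # anticorrelated slits (Psi^x term)
    parall = U(y1) * U(y2) + D(y1) * D(y2)    # correlated slits (Psi^|| term)
    psi = (1 - eta) / 2 * cross + (1 + eta) / 2 * parall
    return psi / np.sqrt(np.sum(np.abs(psi)**2))

y = np.linspace(-4e-5, 4e-5, 600)
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
for eta in (-1.0, 0.0, 1.0):
    rho = np.abs(psi0_y(Y1, Y2, eta))**2
    # correlation of the two slit coordinates as a crude entanglement indicator
    corr = (rho * Y1 * Y2).sum() / np.sqrt((rho * Y1**2).sum() * (rho * Y2**2).sum())
    print(f"eta = {eta:+.0f}:  <y1 y2> correlation = {corr:+.2f}")
```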
It is easy to see that η=0 leads to a separable state, whereas |η|=1 leads to a maximally entangled state; for η=+1 the state is maximally correlated, and for η=-1 the state is maximally anticorrelated <cit.>. We have also symmetrized the wave function, as we consider the particles to be indistinguishable bosons. It is worth noting that even without using the slits, this kind of initial wave function could be produced and reliably controlled using optical manipulation <cit.> of an entangled state generated from colliding Bose-Einstein condensates <cit.>. Furthermore, since the free two-particle Hamiltonian is separable, the time evolution of this wave function can be found from Eq. (<ref>) analytically as Ψ_t( r_1, r_2)=N[(1-η)/2 Ψ_t^(×)( r_1, r_2)+(1+η)/2 Ψ_t^(||)( r_1, r_2)] in which the functions Ψ_t^(×)( r_1, r_2) and Ψ_t^(||)( r_1, r_2) are constructed out of time-dependent Gaussian wave functions G_t as in (<ref>), where G_t(x;σ, l, u, a) = (2 π s_t^2)^-1/4 e^i m a l t/ħ e^-(x-l-u t-a t^2/2)^2/(4σ s_t) × e^i m/ħ[(u-a t)·(x-l-u t/2)-a^2 t^3/6] and s_t = σ (1+i ħ t/(2 m σ^2)) [The Gaussian solution of the Schrödinger equation for a particle under uniform force was first introduced by de Broglie <cit.> and rephrased by Holland in <cit.>, but according to our investigation, none of them satisfies the Schrödinger equation exactly: to satisfy the wave equation, an additional time-dependent phase, i.e. our first exponential term in (<ref>), is needed.]. Using this wave function, the detection time and position of the first observed particle are uniquely determined by solving Eq. <ref>. Then, using Eq. <ref>, we can find the trajectories of the remaining particle after the first particle detection. Using the trajectories, we can find the joint detection data distribution in (t_L,x_L;t_R,x_R) space, where t_L/R is the detection time and x_L/R is the detection position on the left/right screen. The probability density behind this distribution can be formally written as P(t_L,x_L;t_R,x_R)=∫ d R^0 |Ψ_0( R^0)|^2 ×∏_i=L,Rδ(t_i-T_i( R^0))δ(x_i-X_i( R^0)), where T_L,R( R^0_1, R^0_2) and X_L,R( R^0_1, R^0_2) are the arrival time and position on the left and right screens for the particles with initial configuration ( R^0_1, R^0_2), respectively. Note how the above joint distribution, and therefore any marginal distribution derived from it, is sensitive to the Bohmian dynamics through the functions T and X, and to the Born rule through |Ψ_0( R^0)|^2. The joint two-particle arrival time distribution is then defined as Π(t_L, t_R) = ∫∫ P(t_L, x_L; t_R, x_R) dx_L dx_R. The right and left marginal arrival time distributions are defined correspondingly as Π_L(t_L) = ∫ Π(t_L, t_R) dt_R and Π_R(t_R) = ∫ Π(t_L, t_R) dt_L. The trajectories of the particles and the resulting arrival time distributions are numerically computed for an ensemble of particles whose initial positions are sampled from |Ψ|^2, and the corresponding results are described in the next section. § RESULTS In the numerical studies of this work, the parameters of the setup have been chosen as σ_x=σ_y=10^-6 m, u_x=20 m/s, u_y=0, l_x = 5×10^-3 m, l_y =10^-5 m. These values are in agreement with the proposed setup in reference <cit.> in which colliding helium-4 atoms have been considered for producing an initial entangled state <cit.>. However, we consider heavier atom pairs as well, which lead to a more visible interference pattern for some values of the parameters and locations of the screens, see Fig. <ref>. In Fig.
<ref>, some Bohmian trajectories are plotted for helium atoms. In this figure, the cyan trajectories are computed without the collapse effect, while the black ones include it. One can see that after the detection of the left particle, the right particle starts to deviate from the cyan trajectories, as it is now guided by the conditional wave function. It is worth noting that the ensemble of trajectories can be experimentally reconstructed using weak measurement techniques <cit.>, which can be used as a test of our results. In Figs. <ref> and <ref>, the joint arrival time distribution Π(t_L, t_R) and the right and left marginal distributions Π_R(t_R) and Π_L(t_L) are plotted for cesium atoms in two cases: with the collapse effect in black and without it in dark cyan. Fig. <ref> shows the one-particle and two-particle temporal interference patterns for fixed screen locations and different values of the entanglement parameter η. The marginal distributions are generated using 10^6 trajectories. However, for clarity, only 10^4 points are shown in the joint scatter plots. As mentioned, maximum entanglement occurs when |η|=1, and when η=0 the particles are entirely uncorrelated. As one can see in Fig. <ref>, the visibilities of the joint and marginal distributions have an inverse relation. This behavior represents a temporal counterpart to the complementarity between the one-particle and two-particle interference visibilities of the arrival position pattern, which can be observed in the conventional double-slit experimental configuration <cit.>. Moreover, the correction of the two-particle distributions due to the collapse effect decreases as the entanglement is turned off, and for η=0 the interference patterns with and without this correction are the same. In fact, in this case, we have Π(t_L, t_R)=Π_R(t_R)Π_L(t_L). In Fig. <ref>, the one-particle and two-particle temporal interference patterns are depicted for different positions of the left screen, while the entanglement parameter is fixed to η=-1. The difference between the patterns obtained without the collapse effect (dark-cyan plots) and with it included (black plots) is evident. The closer the left screen is to the slits, the earlier the wave function reduction occurs, and the more visible its effect is on the joint distribution. Note that, although the collapse effect changes the particles' trajectories and the resulting joint distribution, it does not change the one-particle distribution patterns. This shows the establishment of the no-signaling condition, despite the manifestly non-local Bohmian dynamics: The right marginal arrival time distribution, as a locally observable quantity, turns out to be independent of whether there is any screen on the left or not, and if there is one, it is not sensitive to the location of that detection screen. Note that this fact is not trivial, because the well-known no-signaling theorem is proved for observable distributions provided by a POVM. However, in the general case, the intrinsic Bohmian arrival time distribution cannot be described by a POVM, at least when the detector back-effect is ignored <cit.>. § DETECTOR BACK-EFFECT The arrival distributions computed in the previous sections should be considered as ideal or intrinsic distributions <cit.>, since the influence of the detector is ignored in our theoretical treatment.
Such an idealization is commonly used in most previous studies of the Bohmian arrival time distribution (for example, see <cit.>), and seems more or less satisfactory in many applications, including the double-slit experiment <cit.>. Nonetheless, in principle, the presence of the detector could modify the wave function evolution even before the particle detection <cit.>. This is called the detector back-effect. To have a more thorough investigation of detection statistics, we should consider this effect. However, due to some fundamental problems, such as the measurement problem and the quantum Zeno effect <cit.>, a complete investigation of the detector effects is problematic at the fundamental level, and it is less obvious how to model an ideal detector <cit.>. Nonetheless, several non-equivalent phenomenological models have been proposed, such as the generalized Feynman path integral approach in the presence of an absorbing boundary <cit.>, the Schrödinger equation with a complex potential <cit.>, the Schrödinger equation with an absorbing (or complex Robin) boundary condition <cit.>, and so on. In this section, we merely consider the absorbing boundary rule (ABR), which is compatible with the Bohmian picture and has recently been developed for entangled multi-particle systems <cit.>. The results of other approaches may not be the same <cit.>; see also Appendix <ref>. A detailed study of the differences is an interesting topic, which is left for future work. §.§ Absorbing Boundary Rule According to the ABR, the particle wave function ψ evolves according to the free Schrödinger equation, while the presence of a detection screen is modeled by imposing the following boundary condition on the detection screen, x∈𝕊, n·∇ψ=iκψ, where κ>0 is a constant characterizing the type of detector, in which ħκ represents the momentum to which the detector is most sensitive. This boundary condition ensures that waves with wave number κ are completely absorbed, while waves with other wave numbers are partly absorbed and partly reflected <cit.>. Note that the Hille–Yosida theorem implies that the Schrödinger equation with the above boundary condition has a unique solution for every initial wave function defined on one side of the boundary. The boundary condition (<ref>) implies that Bohmian trajectories can cross the boundary 𝕊 only outwards, and so there are no multi-crossing trajectories. In the Bohmian picture, a detector clicks when and where the Bohmian particle reaches the detection surface 𝕊. In fact, it is a description of a “hard” detector, i.e., one that detects a particle immediately when it arrives at the surface 𝕊. Nonetheless, it should be noted that the boundary absorbs the particle but does not completely absorb the wave. The wave packet moving towards the detector may not be entirely absorbed, but rather partially reflected <cit.>. The application of the absorbing boundary condition to the arrival time problem was first proposed by Werner <cit.>, and it has recently been re-derived and generalized by other authors using various methods <cit.>. In particular, it has recently been shown that in a suitable (non-obvious) limit, the imaginary potential approach yields the distribution of detection time and position in agreement with the absorbing boundary rule <cit.>. Moreover, Dubey, Bernardin, and Dhar <cit.> have shown that the ABR can be obtained in a limit similar but not identical to that considered in the quantum Zeno effect, involving repeated quantum measurements.
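The absorption property quoted above can be checked with a one-line calculation: for a plane wave e^ikx plus a reflected wave R e^-ikx at the boundary, the condition n·∇ψ=iκψ gives R = (k-κ)/(k+κ), so the reflected fraction |R|^2 vanishes only at k = κ and approaches unity for very slow or very fast components. This consistency check is ours (it is not spelled out in the text), but it follows directly from the boundary condition.

```python
def reflected_fraction(k, kappa):
    """|R|^2 for exp(ikx) + R exp(-ikx) obeying psi' = i*kappa*psi at the boundary."""
    R = (k - kappa) / (k + kappa)
    return R * R

kappa = 1.0
for k in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"k/kappa = {k / kappa:4.2f}  ->  reflected fraction = {reflected_fraction(k, kappa):.3f}")
```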
Recently the natural extension of the absorbing boundary rule to the n-particle case is discussed by Tumulka <cit.>. The key element of this extension is that, upon a detection event, the wave function gets collapsed by inserting the detected position, at the time of detection, into the wave function, thus yielding a wave function of (n-1) particles. We use this formalism for the investigation of detector back-effect in our double-double-slit setup. §.§ Numerical Results In our experimental setup, due to the influence of gravity, the reflected portions of the wave packets return to the detector screen, while some of them are absorbed and some are reflected again. This cycle of absorption and reflection is repeated continuously. The associated survival probabilities are plotted in Fig. <ref> for some values of detector parameter, κ=κ_0,2κ_0,3κ_0,κ_0/3, where the κ_0 is defined using classical estimation of particles momentum at the screen as κ_0=m √(2gY_R)/ħ. The corresponding Bohmian trajectories and arrival time distributions are presented in Fig. <ref>. As one can see in Figs. <ref> and <ref>, when κ=κ_0, most of the trajectories are absorbed, and approximately none of them are reflected, which is similar to the case when the detector back effect is ignored. These results show that, at least for the chosen parameters, when we use a proper detector with κ=κ_0, the ideal arrival time distribution computed in the previous section, without considering the detector back effect, produces acceptable results. However, in general, Figs. <ref> and <ref> show that the detector back effect cannot be ignored and it leads to new phenomena: i.e., a "fractal" of the interference pattern. § SUMMARY AND OUTLOOK In this work, we proposed a double-double-slit setup to observe non-local interference in the arrival time of entangled particle pairs. Our numerical study shows a complementarity between one-particle visibility and two-particle visibility in the arrival time interference pattern, which is very similar to the complementarity observed for the arrival position interference pattern <cit.>. Moreover, the results of our study indicate that arrival time correlations can serve as an entanglement witness, thereby suggesting the potential use of temporal observables for tasks related to quantum information processing <cit.>. As noted in the introduction, the theoretical analysis of the suggested experiment is more complex than that of a typical double-slit experiment due to several connected fundamental problems, including the arrival time problem, the measurement problem, and the quantum Zeno paradox. We use a Bohmian treatment to circumvent these problems. Our approach can be used for a more accurate investigation of various phenomena beyond the double-double-slit experiment, such as ghost imaging <cit.>, Hong–Ou–Mandel Effect <cit.>, temporal state tomography <cit.>, and so on, which are usually analyzed in a semi-classical approximation (see Appendix <ref>). It is worth noting that, based on other interpretations of quantum theory, there are other non-equivalent approaches that can, in principle, be used to investigate the suggested experiment <cit.>. However, these approaches need to be extended for entangled particle systems first. Comparing the results obtained by these various approaches can be used to test the foundations of quantum theory. 
Specifically, it appears that measuring the arrival time correlations in entangled particle systems can sharply distinguish between different approaches to the arrival time problem <cit.>. A more detailed investigation of this subject is left for future works. § APPENDIX: SEMI-CLASSICAL ANALYSIS Despite the absence of an agreed-upon fundamental approach for arrival time computation, a semiclassical analysis is routinely used to analyze observed data. This approach is often sufficient, especially when particle detection is done in the far-field regime <cit.>. In this analysis, it is assumed that particles move along classical trajectories, and the arrival time distribution is computed using the quantum initial momentum distribution <cit.>. It is important to compare the semiclassical analysis with our Bohmian result. To this end, we re-derive the semiclassical approximation and extend it to the near-field regime, using the Wigner phase-space formalism. The Wigner distribution function is defined as follows <cit.>: f^W(r,p)=1/√(2πħ)∫Ψ^*(r-y)Ψ(r+y) e^2ip.yd^4y. where r=(r_1,r_2), y=(y_1,y_2) and p=(p_1,p_2). The marginals of this distribution function are quantum position and momentum distributions <cit.>. The Schrodinger equation leads to the following quantum Liouville equation for Wigner distribution function <cit.>, ∂ f^W_t/∂ t+∑_i=1,2p_i/m.∇_x_if+∇_p_i.(fF_i^W) =0 By comparing this equation with the classical Liouville equation, the Wigner trajectories are defined as follows <cit.> d^2x_i/dt^2=F_i^W/m These trajectories are used to understand various quantum phenomena <cit.>[Nonetheless, it is worth noting that, since the Wigner function is not positive-definite in general <cit.>, it cannot be interpreted as a realistic phase-space distribution function. Consequently, in contrast to Bohmian trajectories, the Wigner trajectories cannot be reliably utilized to visualize the underlying quantum motion in a completely consistent manner.]. For a constant external gravitational force, this equation of motion leads to the trajectories exactly the same as classical trajectories. Moreover, due to the non-overlapping of the initial Gaussian wave packets, the Wigner formalism leads to the following initial (positive) phase-space distribution. f^W_t_0(r_1,r_2;p_1,p_2)≈ |Ψ_t_0(r_1,r_2)|^2 |Ψ̃_t_0(p_1,p_2)|^2, where Ψ̃_t_0(p_1,p_2) is wave function in momentum space. Using the initial phase space distribution and classical (or Wigner) trajectories, we compute the arrival time distribution in our setup. In Figure <ref>, the semi-classical joint arrival time distribution is compared with the corresponding Bohmian result. As can be seen in this figure, when the left screen is in the near-field or far-field, the semi-classical distributions are more or less the same as the Bohmian distributions (see the panels (a) and (b) of Fig. <ref> for near-field, and the panels (g) and (h) for far-field). However, in the middle-field, i.e., the panels (c)-(f) of Fig. <ref>, the Bohmian distribution is distinguishable from the semi-classical approximation. This determines an important window in the parameter space of this experiment, which can be used to test Bohmian predictions and other arrival time proposals in a strict manner.
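As a concrete illustration of the semi-classical recipe of this appendix, the sketch below samples initial heights and vertical momenta of an upper-slit packet from independent Gaussians, as suggested by the factorized Wigner distribution for non-overlapping packets, and propagates them along classical parabolas to a horizontal screen in order to histogram the arrival times. The helium-4 mass and the 10 cm drop are assumptions made for the example; σ_y and l_y are the values used in the main text, and the momentum spread ħ/(2σ_y) is that of a minimum-uncertainty Gaussian.

```python
import numpy as np

hbar = 1.054571817e-34
m = 6.6464731e-27                 # helium-4 mass in kg (assumed)
g = 9.81
sigma_y, l_y = 1e-6, 1e-5         # packet width and slit offset from the main text
Y_screen = -0.10                  # screen 10 cm below the slits (assumed)

rng = np.random.default_rng(0)
n = 200_000

# positions from |Psi|^2 and momenta from |Psi~|^2 of the upper-slit packet
y0 = rng.normal(loc=+l_y, scale=sigma_y, size=n)
p0 = rng.normal(loc=0.0, scale=hbar / (2.0 * sigma_y), size=n)

# classical fall: Y_screen = y0 + (p0/m) t - g t^2 / 2, keep the positive root
v0 = p0 / m
t_arr = (v0 + np.sqrt(v0**2 + 2.0 * g * (y0 - Y_screen))) / g

print(f"mean arrival time {t_arr.mean():.4f} s, spread {t_arr.std():.2e} s")
```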
http://arxiv.org/abs/2307.00990v1
20230703131608
NOMA-Assisted Grant-Free Transmission: How to Design Pre-Configured SNR Levels?
[ "Zhiguo Ding", "Robert Schober", "Bayan Sharif", "and H. Vincent Poor" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
NOMA-Assisted Grant-Free Transmission: How to Design Pre-Configured SNR Levels? Zhiguo Ding, Robert Schober, Bayan Sharif, and H. Vincent Poor Z. Ding and Bayan Sharif are with Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE. Z. Ding is also with Department of Electrical and Electronic Engineering, University of Manchester, Manchester, UK. R. Schober is with the Institute for Digital Communications, Friedrich-Alexander-University Erlangen-Nurnberg (FAU), Germany. H. V. Poor is with the Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA. August 1, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ An effective way to realize non-orthogonal multiple access (NOMA) assisted grant-free transmission is to first create multiple receive signal-to-noise ratio (SNR) levels and then serve multiple grant-free users by employing these SNR levels as bandwidth resources. These SNR levels need to be pre-configured prior to the grant-free transmission and have great impact on the performance of grant-free networks. The aim of this letter is to illustrate different designs for configuring the SNR levels and investigate their impact on the performance of grant-free transmission, where age-of-information is used as the performance metric. The presented analytical and simulation results demonstrate the performance gain achieved by NOMA over orthogonal multiple access, and also reveal the relative merits of the considered designs for pre-configured SNR levels. Grant-free transmission, non-orthogonal multiple access (NOMA), age of information (AoI). § INTRODUCTION Grant-free transmission is a crucial feature of the sixth-generation (6G) network to support various important services, including ultra-massive machine type communications (umMTC) and enhanced ultra reliable and low latency communications (euRLLC) <cit.>. Non-orthogonal multiple access (NOMA) has been recognized as a promising enabling technique to support grant-free transmission, and there are different realizations of NOMA-assisted grant-free transmission. For example, cognitive-radio inspired NOMA can be used to ensure that the bandwidth resources, which normally would be solely occupied by grant-based users, are used to admit grant-free users <cit.>. Power-domain NOMA has also been shown effective to realize grant-free transmission, where successive interference cancellation (SIC) is carried out dynamically according to the users' channel conditions <cit.>. The aim of this letter is to focus on the application of NOMA assisted random access for grant-free transmission <cit.>. Unlike the other aforementioned forms of NOMA, NOMA assisted random access ensures that grant-free transmission can be supported without requiring the base station to have access to the users' channel state information (CSI). 
The key idea of NOMA assisted random access is to first create multiple receive signal-to-noise ratio (SNR) levels and then serve users by employing these SNR levels as bandwidth resources. These SNR levels need to be pre-configured prior to the grant-free transmission and have great impact on the performance of grant-free networks. In the literature, there exist two SNR-level designs, termed Designs I and II, respectively. Design I is based on a pessimistic approach and is to ensure that a user's signal can be still decoded successfully, even if all the remaining users choose the SNR level which contributes the most interference, an unlikely scenario in practice <cit.>. Design II is based on an optimistic approach, and assumes that each SNR level is to be selected by at most one user <cit.>. The advantage of Design II over Design I is that the SNR levels of Design II can be chosen much smaller than those of Design I, and hence are more affordable to the users. The advantage of Design I over Design II is that a collision at one SIC stage does not cause all the SIC stages to fail. The aim of this letter is to study the impact of the two SNR-level designs on grant-free transmission, where the age-of-information (AoI) is used as the performance metric <cit.>. As the AoI achieved by Design I has been analyzed in <cit.>, this letter focuses on Design II, where a closed-form expression for the AoI achieved by NOMA with Design II and its high SNR approximation are obtained. The presented analytical and simulation results reveal the performance gain achieved by NOMA over orthogonal multiple access (OMA). Furthermore, compared to Design I, Design II is shown to be beneficial for reducing the AoI at low SNR, but suffers a performance loss at high SNR. § SYSTEM MODEL Consider a multi-user uplink network, where M users, denoted by U_m, communicate with the same base station in a grant-free manner. In particular, each user generates its update at the beginning of a time frame which consists of N time slots having duration T seconds each. The users compete for channel access to deliver their updates to the base station, where the probability of a transmission attempt, denoted by ℙ_ TX, is assumed to be identical for all users. With OMA, only a single user can be served in each time slot, whereas the benefit of using NOMA is that multiple users can be served simultaneously. Similar to <cit.>, NOMA assisted random access is adopted for the implementation of NOMA. In particular, the base station pre-configures K receive SNR levels, denoted by P_k, where each user randomly chooses one of the K SNR levels for its transmission with equal probability ℙ_K. If U_1 chooses P_k, U_1 needs to scale its transmit signal by P_k/|h_1^j,n|^2, where h_m^j,n denotes U_m's channel gain in the j-th slot of the n-th frame. If the chosen SNR level is not feasible, i.e., P_k/|h_1^j,n|^2>P, the user simply keeps silent, where P denotes the user's transmit power budget. Each user is assumed to have access to its own CSI only, and the users' channels are assumed to be independent and identically complex Gaussian distributed with zero mean and unit variance. §.§ Two Designs to Configure the SNR Levels, P_k Recall that the SNR levels are configured prior to transmission, which means that the SNR levels cannot be related to the users' instantaneous channel conditions, but are solely determined by the users' target data rates. In the literature, there exist two SNR-level designs, as explained in the following. 
For illustrative purposes, assume that P_1≥⋯≥ P_K, i.e., SIC is carried out by decoding the user using P_i before decoding the one using P_j, i<j. For the trivial case where M<K, only the M smallest SNR levels are used. §.§.§ Design I In <cit.>, the receive SNR levels are configured as follows: log(1+P_k/1+(M-1)P_k+1)=R, 1≤ k ≤ K-1, and log(1+P_K)=R, i.e., P_K=2^R-1 and P_k=(2^R-1)(1+(M-1)P_k+1), where the users are assumed to have the same target data rate, denoted by R. The rationale behind this design is to ensure successful SIC in the worst case, where one user chooses P_k and all the remaining users choose the SNR level which contributes the most interference, i.e., P_k+1. This is a pessimistic assumption since not all the remaining users make a transmission attempt, and it is unlikely for all users to choose the same SNR level. §.§.§ Design II Alternatively, the SNR levels can also be configured as follows <cit.>[A more sophisticated design is to introduce an auxiliary parameter, η, and integrate the two designs shown in (<ref>) and (<ref>) together, i.e., log(1+P_k/1+η P_k+1)=R, where an interesting direction for future research is to optimize η for AoI reduction. ]: log(1+P_k/1+P_k+1)=R, 1≤ k ≤ K-1, and log(1+P_K)=R, i.e., P_K=2^R-1 and P_k=(2^R-1)(1+P_k+1). Design II has the drawback that one collision in the i-th SIC stage can cause a failure to the earlier stages, i.e., the j-th SIC stage, j<i, since there is more than one interference source for P_i. However, Design II offers the benefit that its SNR levels are less demanding than those for Design I, as can be seen from Table <ref>. Recall that a user has to remain silent if its chosen SNR level is not feasible. Because the SNR levels of Design I are large, these SNR levels cannot be fully used by the users, and hence the number of supported users is smaller than that for Design II. §.§ AoI of Grant-Free Transmission For grant-free transmission, the AoI is an important metric to measure how frequently a user can update the base station. In particular, an effective grant-free transmission scheme needs to ensure that the collisions among the users can be effectively reduced, and the base station can be frequently updated, which makes the AoI an ideal metric. Without loss of generality, U_1 is focused on as the tagged user, and its average AoI is defined as follows <cit.>: Δ̅ = W→∞lim1/W∫^W_0Δ(t)dt, where Δ(t) denotes the time elapsed since the last successfully delivered update. As the AoI achieved by OMA and NOMA with Design I has been analyzed in <cit.>, the AoI achieved by Design II will be focused on in this letter. § AOI PERFORMANCE ANALYSIS To facilitate the AoI analysis, denote the time internal between the (n-1)-th and the n-th successful updates by Y_n, and denote the time for the n-th successful update to be delivered to the base station by S_n, n=1, 2, ⋯. By using the definition of the AoI, Δ̅ can be expressed as follows <cit.>: Δ̅ = ℰ{S_n-1Y_n}/ℰ{Y_n}+ℰ{Y_n^2}/2ℰ{Y_n}, where ℰ{·} denotes the expectation operation. With some algebraic manipulations, ℰ{S_n-1Y_n} =ℰ{S_n}ℰ{Y_n} -ℰ{S_n ^2 }+ℰ{S_n} ^2, ℰ{Y_n} = TN/1- 𝐬_0^T𝐏_M^N1_M× 1 and ℰ{Y_n^2} = N^2T^2 1+ 𝐬_0^T𝐏_M^N1_M× 1/(1- 𝐬_0^T𝐏_M^N1_M× 1)^2 + 2ℰ{S_n^2 } - 2 ℰ{S_n}^2, ℰ{S_n} = T∑^N_l=1l 𝐬_0^T𝐏_M^l-1𝐩/1- 𝐬_0^T𝐏_M^N1_M× 1, ℰ{S_n^2} =T^2 ∑^N_l=1l^2 𝐬_0^T𝐏_M^l-1𝐩/1- 𝐬_0^T𝐏_M^N1_M× 1, 𝐬_0=[ 1 0_1× (M-1) ]^T, 0_m× n denotes an all-zero m× n matrix, 1_m× n is an all-one m× n matrix, 𝐩=1_M× 1 -𝐏_M1_M× 1, and 𝐏_M is an M× M matrix to be explained later. 
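To make the closed-form AoI expression above concrete, the following Python sketch evaluates it for a given sub-stochastic transition matrix 𝐏_M (obtained, e.g., from the state-transition probabilities derived below for Design II, or from the corresponding results for OMA and Design I). The function name and interface are ours, not from the paper; this is a minimal numerical sketch under the stated model, not the authors' implementation.

import numpy as np

def average_aoi(PM, N, T):
    """Average AoI from the closed-form expression above, given the
    sub-stochastic transition matrix P_M of the access competition,
    N slots per frame, and slot duration T."""
    M = PM.shape[0]
    s0 = np.zeros(M); s0[0] = 1.0
    ones = np.ones(M)
    p = ones - PM @ ones                 # per-step prob. that the tagged user succeeds
    PN = np.linalg.matrix_power(PM, N)
    fail = s0 @ PN @ ones                # prob. that a whole frame passes with no success
    EY = T * N / (1.0 - fail)
    ES = T * sum(l * (s0 @ np.linalg.matrix_power(PM, l - 1) @ p)
                 for l in range(1, N + 1)) / (1.0 - fail)
    ES2 = T**2 * sum(l**2 * (s0 @ np.linalg.matrix_power(PM, l - 1) @ p)
                     for l in range(1, N + 1)) / (1.0 - fail)
    EY2 = N**2 * T**2 * (1.0 + fail) / (1.0 - fail)**2 + 2.0 * ES2 - 2.0 * ES**2
    ESY = ES * EY - ES2 + ES**2          # E{S_{n-1} Y_n}
    return ESY / EY + EY2 / (2.0 * EY)

Here p = 1_{M×1} − 𝐏_M 1_{M×1} collects the per-step probabilities that the tagged user's update is delivered, and 𝐬_0^T𝐏_M^N 1 is the probability that an entire frame of N slots elapses without a successful update.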
Recall that the considered access competition among the users can be modelled as a Markov chain with M+1 states, denoted by s_j, 0≤ j≤ M. In particular, state s_j, 0≤ j ≤ M-1, denotes that j users succeed in updating the base station, and the tagged user is not one of the successful user. s_M denotes that the tagged user succeeds in updating the base station. The state transition probability, denoted by P_j,j+i, is the probability from s_j to s_j+i, i≥ 0 and j+i≤ M-1. 𝐏_M is an all zero matrix except its element in the (j+1)-th row and (j+i+1)-th column is P_j,j+i. The calculation of the state transition probability is directly determined by the transmission strategy. The following lemma provides P_j,j+i achieved by NOMA with Design II. The state transition probability, P_j,j+i, achieved by NOMA with Design II is given by P_j,j+i = ∑^M-j_m=i+1M-j mℙ_ TX^m (1- ℙ_ TX) ^M-j-m ×M-j-i/M-jm ⋯ (m-i+1)ℙ_K^mγ_i γ_m,i +M-j iℙ_ TX^i (1- ℙ_ TX) ^M-j-iM-j-i/M-ji! ℙ_K^iγ_i , for 0≤ j≤ M-2 and 1≤ i ≤min{K,M-1-j}, and P_j,j = 1-∑^min{K,M-j}_i=1P̅_j,j+i, for 0≤ j≤ M-1, where P̅_j,j+i = ∑^M-j_m=i+1M-j mℙ_ TX^m (1- ℙ_ TX) ^M-j-m × m ⋯ (m-i+1)ℙ_K^mγ_iγ_m,i +M-j iℙ_ TX^i (1- ℙ_ TX) ^M-j-i i!ℙ_K^iγ_i γ_i= k_1+⋯+k_K=i max{ k_1, ⋯,k_K}=1∑(1-ℙ_e,1)^k_1⋯(1-ℙ_e,K)^k_K, γ_m,i= k_1+⋯+k_K=m-i ∑(m-i)!/k_1! ⋯ k_K!ℙ_e,1 ^k_1⋯ℙ_e,K ^k_K, and ℙ_e,k=1-e^-P_k/P. See Appendix <ref>. At high SNR, i.e., P→∞, ℙ(P_k/|h_1^j,n|^2>P)→ 0, i.e., all SNR levels become affordable to the users, and hence the expressions for the state transition probability can be simplified as shown in the following corollary. At high SNR, P_j,j+i can be approximated as follows: P_j,j+i≈ (M-j-1)!/(M-j-i-1)!ℙ_ TX^i (1- ℙ_ TX) ^M-j-iℙ_K^iγ̅_i , and P̅_j,j+i≈ (M-j)!/(M-j-i)!ℙ_ TX^i (1- ℙ_ TX) ^M-j-iℙ_K^iγ̅_i, where γ̅_i= k_1+⋯+k_K=i max{ k_1, ⋯,k_K}=1∑ 1. Remark 1: The benefit of using NOMA for AoI reduction can be illustrated based on Corollary <ref>. For the special case of i=1, the high SNR approximation of P_j,j+1 can be expressed as follows: P_j,j+1≈ (M-j-1)ℙ_ TX^i (1- ℙ_ TX) ^M-j-1ℙ_K γ̅_i . If ℙ_K =1/K, P_j,j+1 can be simplified as follows: P_j,j+1≈ (M-j-1)ℙ_ TX^i (1- ℙ_ TX) ^M-j-1, which is exactly the same as that of the OMA case shown in <cit.>. However, for OMA, P_j,j+i=0, i>1, whereas for NOMA, P_j,j+i>0, 1<i≤ K, which means that with NOMA more users can be served, and hence the AoI of NOMA will be smaller than that of OMA. § SIMULATION RESULTS In Fig. <ref>, the AoI of the considered grant-free schemes is shown as a function of the number of users, M. As can be seen from the figure, the use of NOMA transmission can significantly reduce the AoI compared to the OMA case, particularly when there is a large number of users. This ability to support massive connectivity is valuable for umMTC which is the key use case of 6G networks. The figure also demonstrates the accuracy of the analytical results developed in Lemma <ref>. In addition, Fig. <ref> shows that at low SNR, the use of Design II yields a significant performance gain over Design I, particularly for large M. However, at high SNR, the use of Design I is more beneficial, as demonstrated in Fig. <ref>. An interesting observation from Fig. <ref> is that for the special case of M=5 and P=30 dB, the use of K=2 SNR levels yields a better performance than K=4. This is due to the fact that the used choice ℙ_ TX=min{K/M,1} is not optimal, as can be explained by using Corollary <ref>, which shows that P_j,j+i is a function of (1- ℙ_ TX) ^M-1 at high SNR. 
For the special case of M=5 and K=4, ℙ_ TX=4/5, and hence (1- ℙ_ TX) ^M-1 can be very small, which causes the AoI of K=4 to be larger than that of K=2. We note that for large M, the performance gain of NOMA over OMA can be always improved by increasing K, i.e., using more SNR levels, as shown in Fig. <ref>. In order to better illustrate the impact of the transmit SNR, P, on the performance of the two considered NOMA designs, Fig. <ref> shows the AoI as a function of P. Fig. <ref> shows that regardless of the choices of the transmit SNR, the AoI of NOMA is always smaller than that of OMA, which is consistent with the observations from Fig. <ref>. In addition, Fig. <ref> also confirms the conclusion that Design II can outperform Design I at low SNR, but suffers a performance loss at high SNR. The reason for Design II to outperform Design I at low SNR is that the SNR levels required by Design I are more demanding than those of Design II, and hence may not be affordable to the users at low SNR, i.e., P_k/|h_1^j,n|^2>P. The reason for Design I to outperform Design II at high SNR is that, at high SNR, all the levels of the two designs become affordable to the users, and transmission failures are mainly caused by collisions, where unlike Design II, Design I ensures that a collision at the i-th SIC stage does not cause any failure to the j-th stage, j<i. An interesting observation from Fig. <ref> is that the AoI achieved by Design II may even get degraded by increasing SNR. This is because at low SNR, some users may find that their chosen SNR levels are not affordable, which reduces the number of active users and hence is helpful to reduce the AoI by avoiding collisions. As discussed previously, the transmission attempt probability, ℙ_ TX, is an important parameter for grant-free transmission. In Fig. <ref>, we show the AoI achieved by the considered schemes for different choices of ℙ_ TX. In particular, we consider the adaptive choice, ℙ_ TX=min{K/M,1} for NOMA and ℙ_ TX=1/M-j for OMA, where j is the number of users which have successfully delivered their updates to the base station. With the fixed choice, ℙ_ TX is set as 0.05. As can be seen from the figure, with a given choice of ℙ_ TX, the AoI achieved by NOMA is worse than that of OMA for the special case of low SNR and small M. Nevertheless, the performance gain of NOMA over OMA is still significant in general. In addition, the figure also shows that the use of the adaptive choice of ℙ_ TX yields a better performance than that of a fixed ℙ_ TX. § CONCLUSION In this letter, the application of NOMA-assisted random access to grant-free transmission has been studied, where the two SNR-level designs and their impact on grant-free networks have been investigated based on the AoI. The presented analytical and simulation results show that the two NOMA designs outperform OMA, and exhibit different behaviours in the low and high SNR regimes. § PROOF FOR LEMMA <REF> Suppose that among the M users, j users have already successfully delivered their updates to the base station, but the tagged user, i.e., U_1, still has not succeeded. Define E_j,j+i, 0≤ i ≤ K, as the event, that i additional users succeed, but the tagged user, U_1, is not among the i users. The key to studying the AoI is to analyze the state transition probabilities, P_j,j+i≜ℙ( E_j,j+i ), 0≤ i ≤ K. 
The expressions for P_j,j+i, 0≤ i ≤ K, can be obtained from the following probabilities, P̅_j,j+i≜ℙ( E̅_j,j+i ), 1≤ i ≤ K, where E̅_j,j+i denotes the event, in which among the M-j users, there are i additional users which succeed in updating their base station. Unlike for E_j,j+i, U_1 can be one of the i successful users for E̅_j,j+i. We note that not all the remaining M-j users make a transmission attempt. By using the transmission attempt probability, ℙ_ TX, P̅_j,j+i can be expressed as follows: P̅_j,j+i = ∑^M-j_m=i+1M-j mℙ_ TX^m (1- ℙ_ TX) ^M-j-mP̅_m,i, where P̅_m,i denotes the probability of the event, that among m active users, i.e., m users making a transmission attempt, i users succeed in updating their base station. Without loss of generality, assume that U_k, 1≤ k ≤ m, are the m active users. In the following, we focus on a particular event, denoted by E_i, in which among U_k, 1≤ k ≤ m, U_1, …, U_i are the i successful users. Therefore, P̅_m,i can be expressed as follows: P̅_m,i = m iℙ(E_i), where m i is the number of events which have the same probability as E_i. If Design I is used, the detection at the n-th SIC stage is affected by the l-th stage, l<n, only, and a collision which happens at a later stage, i.e., the p-th stage, p>n, has no impact. However, with Design II, a collision will cause all SIC stages to fail, which makes the performance analysis for Design II significantly different from the one shown in <cit.>. Considering the difference between the two designs, the fact that U_k, 1≤ k ≤ m, are the m active users, but only U_n, 1≤ n ≤ i, are successful has the following two implications: * There is no collision between U_n, 1≤ n ≤ i, i.e., the i successful users choose different SNR levels. In addition, each user finds its chosen SNR level feasible. * Each of the failed users, U_n, i+1≤ n ≤ m, finds out that its chosen SNR level is not feasible. The second implication is the key to simplifying the performance analysis, and can be explained as follows. Without loss of generality, assume that U_i+1 chooses P_k, and finds that P_k is feasible. Because U_i+1 is one of the active users, it will definitely make an attempt for transmission. Therefore, the only reason to cause this user's transmission to fail is a collision, i.e., another active user chooses the same SNR level as U_i+1. This collision at P_k will cause a failure at the k-th SIC stage, as well as the following SIC stages. More importantly, the collision at P_k can also lead to a failure of the early SIC stages, due to the additional interference caused by the two simultaneous transmissions at P_k. Define E_i,1 as the event where U_n, 1≤ n ≤ i, successfully deliver their updates to the base station, and E_i,2 as the event where U_n, i+1≤ n ≤ m, fail to deliver their updates to the base station. The two aforementioned implications are also helpful in establishing the independence between the two events, E_i,1 and E_i,2, which leads to the following expression: ℙ(E_i) = ℙ(E_i,1) ℙ(E_i,2) . In order to better illustrate how ℙ(E_i) can be evaluated, define E̅_i,1 as the particular event that U_n chooses P_n, 1≤ n ≤ i. By using the error probability defined in the lemma, ℙ_e,n, ℙ(E̅_i,1) can be expressed as follows: ℙ(E̅_i,1)= ℙ_K^k_1(1-ℙ_e,1)^k_1⋯ℙ_K^k_K(1-ℙ_e,K)^k_K, where k_n=1, for 1≤ n ≤ i, and k_n=0 for i+1≤ n ≤ K. 
By using the general expression shown in (<ref>) and enumerating all the possible choices of k_n, 1≤ n ≤ K, ℙ( E_i,1) can be evaluated as follows: ℙ( E_i,1) =i!k_1+⋯+k_K=i max{ k_1, ⋯,k_K}=1∑ℙ_K^k_1(1-ℙ_e,1)^k_1⋯ℙ_K^k_K(1-ℙ_e,K)^k_K =i!ℙ_K^ik_1+⋯+k_K=i max{ k_1, ⋯,k_K}=1∑(1-ℙ_e,1)^k_1⋯(1-ℙ_e,K)^k_K, where i! is the permutation factor since the event where U_1 and U_2 choose P_1 and P_2, respectively, is different from the event in which U_1 and U_2 choose P_2 and P_1, respectively. Similar to ℙ( E_i,1), ℙ( E_i,2) can be obtained as follows: ℙ( E_i,2) =k_1+⋯+k_K =m-i ∑(m-i)!/k_1! ⋯ k_K!ℙ_K^k_1ℙ_e,1 ^k_1×⋯×ℙ_K^k_Kℙ_e,K ^k_K, where the multinomial coefficients (m-i)!/k_1! ⋯ k_K! is needed as explained in the following. Among the m-i unsuccessful users, if k_1 users choose P_1, there are m-i k_1 possible cases. For the remaining m-i-k_1 users, if k_2 users choose P_2, there are further m-i-k_1 k_2 cases. Therefore, the total number of cases for k_n users to choose P_n, 1≤ n≤ K, is given by m-i k_1⋯m-i-k_1-⋯-k_K k_K = (m-i)!/k_1! ⋯ k_K!. It is interesting to point out that for E_i,1, the reason for having coefficient i! can be explained in a similar manner, since i!/k_1! ⋯ k_K!=i!, if each k_n is either one or zero. By using (<ref>), (<ref>), and (<ref>), probability P_j,j can be expressed as follows: P_j,j = 1-∑^K_i=1P̅_j,j+i = 1-∑^K_i=1∑^M-j_m=i+1M-j mℙ_ TX^m (1- ℙ_ TX) ^M-j-m ×m iℙ(E_i,1) ℙ(E_i,2). By substituting (<ref>) and (<ref>) into (<ref>) and with some algebraic manipulations, the expression for P_j,j can be explicitly obtained as shown in the lemma. By using the difference between E_j,j+i and E̅_j,j+i, probability P_j,j+i can be obtained from P̅_j,j+i as follows: P_j,j+i = ∑^M-j_m=i+1M-j mℙ_ TX^m (1- ℙ_ TX) ^M-j-m ×M-j-i/M-jP̅_m,i. The proof of the lemma is complete. IEEEtran
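For completeness, a small numerical sketch of the lemma for Design II is given below, together with the pre-configured SNR levels of the two designs; helper names such as snr_levels and transition_matrix_design2 are ours, and the code is an illustrative sketch rather than the authors' implementation. The multinomial sum defining γ_{m,i} is collapsed via the multinomial theorem to (Σ_k ℙ_{e,k})^{m-i}, and the standalone m = i term of the lemma is absorbed into the sum over m since γ_{i,i} = 1.

from itertools import combinations
from math import comb, exp, factorial
import numpy as np

def snr_levels(R, K, M, design=2):
    """Pre-configured receive SNR levels [P_1, ..., P_K] with P_1 >= ... >= P_K."""
    eps = 2 ** R - 1
    levels = [eps]                                   # start from P_K = 2^R - 1
    for _ in range(K - 1):
        growth = (M - 1) if design == 1 else 1       # Design I vs. Design II recursion
        levels.append(eps * (1 + growth * levels[-1]))
    return levels[::-1]

def transition_matrix_design2(M, K, R, P, p_tx, p_K=None):
    """Sub-stochastic matrix P_M over states s_0, ..., s_{M-1} for Design II."""
    p_K = 1.0 / K if p_K is None else p_K
    p_e = [1.0 - exp(-Pk / P) for Pk in snr_levels(R, K, M, design=2)]

    def gamma(i):        # sum over choices of i distinct levels of prod_k (1 - p_e,k)
        return sum(np.prod([1.0 - p_e[k] for k in c])
                   for c in combinations(range(K), i))

    def gamma_mi(m, i):  # multinomial sum over the m - i failed users
        return sum(p_e) ** (m - i)

    PM = np.zeros((M, M))
    for j in range(M):
        bar_total = 0.0
        for i in range(1, min(K, M - j) + 1):
            bar = sum(comb(M - j, m) * p_tx ** m * (1 - p_tx) ** (M - j - m)
                      * (factorial(m) / factorial(m - i)) * p_K ** m
                      * gamma(i) * gamma_mi(m, i)
                      for m in range(i, M - j + 1))
            bar_total += bar
            if i <= M - 1 - j:                       # tagged user not among the i successes
                PM[j, j + i] = bar * (M - j - i) / (M - j)
        PM[j, j] = 1.0 - bar_total
    return PM

# Example (illustrative parameters only):
# PM = transition_matrix_design2(M=5, K=2, R=1, P=100.0, p_tx=min(2/5, 1))
# print(average_aoi(PM, N=10, T=1.0))   # using the AoI sketch given earlier

Combined with the AoI sketch given earlier, this reproduces the analytical Design II curves used in the simulation section.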
http://arxiv.org/abs/2307.14346v1
20230705163642
Multi-objective Deep Reinforcement Learning for Mobile Edge Computing
[ "Ning Yang", "Junrui Wen", "Meng Zhang", "Ming Tang" ]
cs.NI
[ "cs.NI", "cs.AI", "cs.LG" ]
Multi-objective Deep Reinforcement Learning for Mobile Edge Computing Ning Yang, Junrui Wen, Meng Zhang*, Ming Tang Ning Yang and Junrui Wen are with Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China. (e-mail: [email protected], [email protected]). Meng Zhang is with the ZJU-UIUC Institute, Zhejiang University, Zhejiang, 314499, China. (e-mail: [email protected]). Ming Tang is with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China. (e-mail: [email protected]). (*Corresponding author: Meng Zhang) Received ; accepted ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Mobile edge computing (MEC) is essential for next-generation mobile network applications that prioritize various performance metrics, including delays and energy consumption. However, conventional single-objective scheduling solutions cannot be directly applied to practical systems in which the preferences of these applications (i.e., the weights of different objectives) are often unknown or challenging to specify in advance. In this study, we address this issue by formulating a multi-objective offloading problem for MEC with multiple edges to minimize expected long-term energy consumption and transmission delay while considering unknown preferences as parameters. To address the challenge of unknown preferences, we design a multi-objective (deep) reinforcement learning (MORL)-based resource scheduling scheme with proximal policy optimization (PPO). In addition, we introduce a well-designed state encoding method for constructing features for multiple edges in MEC systems, a sophisticated reward function for accurately computing the utilities of delay and energy consumption. Simulation results demonstrate that our proposed MORL scheme enhances the hypervolume of the Pareto front by up to 233.1% compared to benchmarks. Our full framework is available at <https://github.com/gracefulning/mec_morl_multipolicy>. Mobile edge computing, multi-objective reinforcement learning, resource scheduling. § INTRODUCTION The rise of next-generation networks and the increasing use of mobile devices have resulted in an exponential growth of data transmission and diverse computing needs. With the emergence of new computing-intensive applications, there is a possibility that device computing capacity may not suffice. Cloud computing is one solution that can provide the necessary resources, but it may also result in latency issues. To address this challenge, mobile edge computing (MEC) has emerged as a promising computing paradigm that offloads computing workload to edge or cloud networks and can achieve low latency and high efficiency <cit.>. In MEC systems, task offloading is crucial in achieving low latency and energy consumption <cit.>. By selectively offloading computing tasks to edge or cloud users based on their requirements, MEC systems can optimize resource utilization and improve performance. 
For example, edge servers may be effective for low-latency tasks that require real-time processing, while cloud users may be more suitable for computationally intensive tasks. Additionally, other factors, such as edge load and transmission rate, need to be considered when designing offloading schemes. Task offloading schemes in MEC systems present two key challenges. The natural MEC network environments are full of dynamics and uncertainty. The scheduling of offloading in MEC systems is challenging due to the dynamic and unpredictable nature of users' workloads and computing requirements. The presence of stochastic parameters in the problem poses challenges to the application of traditional optimization methods. Myopically optimizing the offloading decision of the current step is ineffective since it cannot account for long-term utilities. The application of deep reinforcement learning (DRL) has shown substantial potential in addressing sequential decision-making problems and is an attractive technique for dynamic MEC environments <cit.>. The existing works have demonstrated the effectiveness of applying DRL in MEC systems to address unknown dynamics. For instance, Cui et al. <cit.> employed DRL to solve the user association and offloading sub-problem in MEC networks. Lei et al. <cit.> investigated computation offloading and multi-user scheduling algorithms in edge IoT networks and proposed a DRL algorithm to solve the continuous-time problem, supporting implementation based on semi-distributed auctions. Jiang et al. <cit.> proposed an online DRL-based resource scheduling framework to minimize the delay in large-scale MEC systems. However, there is another challenge that requires consideration. Users who initiate tasks may have diverse preferences regarding delay and energy consumption. In various mobile applications such as health care, transportation, and virtual reality, among others, delay in processing data can have serious consequences, particularly in emergency situations. However, in industrial and unmanned aerial networks, energy consumption is subject to strict limits, and thus, computing applications in these areas may prioritize energy over delay. Therefore, offloading scheduling in MEC systems requires a well-designed balance between delay and energy consumption. Moreover, one of the most critical considerations in designing an offloading scheme for MEC systems is that target applications may not be known in advance. Regretfully, existing studies on MEC (e.g., <cit.>), most of them have focused exclusively on single-objective methods. In practice, many scheduling problems in MEC systems are in nature multi-objective. Since these studies have not taken into account multi-objective methods, they cannot address the second challenge of MEC systems, which is dealing with diverse and unknown preferences. The dynamic and uncertain nature of the environments, the diversity of preferences, and the computational infeasibility of classical methods motivate us to seek out new methodologies to address these issues. Note that although some may argue that we can still directly apply single-objective DRL by simply taking a weighted sum (known as scalarization), this is, in fact, not true due to the following issues <cit.>: * Impossibility: Weights may be unknown when designing or learning an offloading scheme. * Infeasibility: Weights may be diverse, which is true when MEC systems have different restrictive constraints on latency or energy. 
* Undesirability: Even if weights are known, nonlinear objective functions may lead to non-stationary optimal policies. To effectively address these challenges, we propose employing multi-objective reinforcement learning (MORL) to design a task offloading method. We summarize our main contributions as follows: * Multi-objective MEC Framework: We formulate the multi-objective MDP (Markov decision process) problem framework. Compared with previous works, our framework focuses on the Pareto optimal solutions, which characterize the performance of the offloading scheduling policy with multiple objectives under different preferences. * Multi-objective Decision Model: We propose a novel MORL method based on proximal policy optimization (PPO) to solve the multi-objective problem. Our proposed method aims to achieve the Pareto near-optimal solution for diverse preferences. Moreover, we introduce a well-designed encoding method to construct features for multi-edge systems and a sophisticated reward function to compute delay and energy consumption. * Numerical Results: Compared to benchmarks, our MORL scheme increases the hypervolume of the Pareto front up to 233.1%. § SYSTEM MODEL We consider a set of servers ℰ={0,1,2,...,E} with one remote cloud server (denoted by index 0) and E edge servers, and consider a set of users 𝒰={1,2,...,U} in a MEC system, as shown in Fig. <ref>. We use index e ∈ℰ to denote a server. Index u ∈𝒰 denotes a user. Our model is a continuous-time system and has discrete decision steps. Consider one episode consisting of T steps, and each step is denoted by t ∈{ 1,2,...,T}, each with a duration of Δ t seconds. -1pt Multiple users request MEC services from servers. At the beginning of each step, the arrival time of a series of tasks follows a Poisson distribution for each user, and the Poisson arrival rate for each user is λ_p. The tasks are placed in a queue with a first in, first out (FIFO) queue strategy. In each step, the system will offload the first task in the queue to one of the servers. Then the task is removed from the queue. Let ℳ = {1,2,..., M} denote the set of tasks in an episode. We use m ∈ℳ to denote a task and use L_m to denote the size of task m, which follows an exponential distribution <cit.> with mean L̅. We consider a Rayleigh fading channel model in the MEC network. We denote h∈ℝ^U × (E+1) as the U× (E+1) channel matrix. Thus, the achievable data rate from user u to server e is C_u,e = Wlog_2(1 + p^ off|h_u,e|^2/σ^2), ∀ u∈𝒰, e∈ℰ, where σ ^2 is additive white Gaussian noise (AWGN) power, and W is the bandwidth. The offloading power is p^ off, and the channel coefficient from user u to server e is h_u,e. Offloading: We denote the offloading decision (matrix) as x={x_m,e}_m∈ℳ,e∈ℰ, where x_m,e∈{0, 1} is an offloading indicator variable; x_m,e=1 indicates that task m is offloaded to server e. If task m comes from user u. The offloading delay for task m is given by <cit.> T_m^off = ∑_e ∈ℰx_m,eL_m/C_u,e,   ∀ m∈ℳ. The offloading energy consumption for task m with offloading power p^ off is E_m^off = p^offT_m^off,  ∀ m∈ℳ. Execution: Each server executes tasks in parallel. We denote the beginning of step t as time instant τ_t, given by τ_t=tΔ t. The computing speed for each task in server e at time instant τ_t is q_e(τ_t) = f_e/n_e^ exe(τ_t)η,  ∀ e∈ℰ, -5pt where f_e is the CPU frequency (in cycles per second) of server e, and η is the number of CPU cycles required for computing a one-bit task. 
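As a small illustration of the offloading part of the cost model (the achievable rate C_{u,e}, the offloading delay T_m^off and the offloading energy E_m^off defined above), the following sketch evaluates the three quantities for one task; the function name and the numerical values are ours and are not the simulation parameters used later in the paper.

import numpy as np

def offload_cost(L_m, h_ue, W, p_off, sigma2):
    """Offloading delay (s) and energy (J) of a task of L_m bits over link (u, e),
    using C = W log2(1 + p^off |h|^2 / sigma^2), T^off = L / C, E^off = p^off T^off."""
    rate = W * np.log2(1.0 + p_off * abs(h_ue) ** 2 / sigma2)   # bits per second
    t_off = L_m / rate
    return t_off, p_off * t_off

# illustrative numbers: 20 Mbit task, 10 MHz bandwidth, 0.1 W offload power
rng = np.random.default_rng(0)
h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)  # unit-variance Rayleigh
t_off, e_off = offload_cost(L_m=20e6, h_ue=h, W=10e6, p_off=0.1, sigma2=1e-13)
print(f"T_off = {t_off:.3f} s, E_off = {e_off:.4f} J")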
We define n_e^ exe(τ_t) as the number of tasks that are being executed in server e at time τ_t. The n_e^ exe(τ_t) tasks share equally the computing resources of server e. Thus, we give the relation between task size L_m and execution delay T_m^ exe for task m as [ L_m = g_m(T_m^ exe)  = ∑_e ∈ℰx_m,e∫_mΔ t+T_m^ off^mΔ t+T_m^ off+T_m^ exeq_e(τ) dτ,∀ m∈ℳ, ] where τ is a time instant. The integral function g_m(T_m^ exe) denotes the aggregate executed size for task m from mΔ t+T_m^ off to mΔ t+T_m^ off+T_m^ exe. Therefore, execution time delay T_m^ exe of task m is T_m^ exe=g_m^-1(L_m), ∀ m∈ℳ. The total energy consumption of execution for task m is given by <cit.> [ E_m^ exe =∑_e ∈ℰx_m,eκηf_e^2 L_m,∀ m∈ℳ, ] where κ denotes an effective capacitance coefficient for each CPU cycle. To summarize, the overall delay and the overall energy consumption for task m∈ℳ are T_m= T_m^ off + T_m^ exe,E_m= E_m^ off + E_m^ exe, respectively. The mean of task size L̅ represents the demand for tasks. If the computational capability of the system exceeds the demand, the scheduling pressure decreases. Conversely, if the demand surpasses the capability, the system will continuously accumulate tasks over time. Therefore, we consider a system that balances computational capability and task demand. The mean of task size L̅ satisfies Δ t(∑_e ∈ℰf_e/η)=λ_p L̅ U, . §.§ Problem Formulation Based on different application scenarios, MEC networks have diverse preferences over energy consumption and delay. Therefore, we aim to design a scheduling policy to achieve the Pareto optimal solution between energy consumption and delay. We cannot directly apply single-objective DRL by simply taking a weighted sum due to impossibility (i.e., weights may be unknown), infeasibility (i.e., MEC systems have different restrictive constraints on latency or energy), and undesirability (i.e., non-stationary optimal policies). This motivates us to use MORL to achieve Pareto optimal solution for any potential preference. We introduce the preference vector ω=(ω_ T,ω_ E) to weight delay and energy consumption, which satisfies ω_ T + ω_ E=1. The subscript T denotes delay about, while the subscript E denotes energy consumption about in our study. A (stochastic) policy is a mapping π: 𝒮×𝒜→ [0,1], where 𝒮 is the state space of the system and 𝒜 is the offloading action space, we will formally define them in next section. For any given task m and system state, policy π selects an offloading decision x_m,e according to a certain probability distribution. Given any one possible ω, the multi-objective resource scheduling problem under the policy π is given by min_π 𝔼_x∼π[ ∑_m∈ℳγ^m (ω_ T T_m+ω_ E E_m )] s.t. x_m,e∈{ 0,1},  ∀ m∈ℳ,∀ e∈ℰ, ∑_e∈ℰx_m,e≤ 1,  ∀ m∈ℳ, where constraint (<ref>) restricts task offloading variables to be binary, and constraint (<ref>) guarantees that each task can be only offloaded to one server. A discount factor γ characterizes the discounted objective in the future. The expectation 𝔼 accounts for the distribution of the task size L_m, the arrival of users , and stochastic policy π. §.§ Multi-objective Metrics To facilitate multi-objective analysis, we further introduce the following notions. Consider a preference set Ω={ω_1,ω_2,...,ω_n} with n preferences. A scheduling policy set Π=[π_1,π_2,...,π_n] with n policies solving problem (<ref>) given corresponding preferences in Ω. Let y denote the performance, given by y={y_ T,y_ E}={∑_m∈ℳT_m,∑_m∈ℳE_m}. A performance of Π is denoted as Y={y^π_1,y^π_2,...,y^π_n}. 
We consider the following definition to characterize the optimal trade-offs between two performance metrics: For a policy set Π, Pareto front PF(Π) is the undominated set : PF(Π)={π∈Π | ∄π^'∈Π:y^π^'≻_P y^π}, where ≻_P is the Pareto dominance relation, satisfying [ y^π≻_P y^π^'; (∀ i : y_i^π≥ y_i^π^') (∃ i : y_i^π > y_i^π^'), i ∈{ T, E}. ] We aim to approximate the exact Pareto front<cit.> by searching for policies set Π. The following hypervolume metric can measure the quality of an approximation: In the multi-objective MEC scheduling problem, as a Pareto front approximation PF(Π), the hypervolume metric is [ 𝒱(PF(Π))=∫_ℝ^2𝕀_V_h(PF(Π))(z)dz, ] where V_h(PF(Π))={z ∈ Z| ∃π∈ PF(Π) : y^π≻_P z ≻_P y^ ref}, and y^ ref∈ℝ^2 is a reference performance point. Function 𝕀_V_h(PF(Π)) is an indicator function that returns 1 if z ∈ V_h(PF(Π)^') and 0 otherwise. The multi-objective resource scheduling problem is still a challenge for MEC networks for the following reasons: * The natural MEC network environments are full of dynamics and uncertainty, leading to unknown preferences of MEC systems. * The computation complexity of the conventional optimization method is demanding since the goal is to get a vector reward instead of a reward value. The objective function (<ref>) and the feasible set of constraints (<ref>) and (<ref>) are non-convex due to binary variables x. The aforementioned problems motivate us to design a MORL scheme to solve (<ref>). § MORL SCHEDULING SOLUTION This section considers the situation of multiple preferences. We consider that a (central) agent makes all offloading decisions in a fully-observable setting. We model the MEC environment as a MOMDP framework. In the subsection, we first introduce the MOMDP framework, which includes a well-designed state encoding method and a sophisticated reward function. Then, we present our algorithm by introducing aspects including the neural network architecture and policy update method. §.§ The MOMDP Framework A MOMDP is a tuple ⟨𝒮, 𝒜, 𝒯, γ, μ, ℛ⟩ that contains state space 𝒮, action space 𝒜, probabilistic transition process 𝒯: 𝒮 × 𝒜 → 𝒮, discount factor γ∈ [0, 1), a probability distribution over initial states μ :𝒮→ [0, 1], and a vector-valued reward function ℛ: 𝒮 × 𝒜→ℝ^2 that specifies the immediate reward for the delay objective and the energy consumption objective. For a decision step t, an agent offloads task m from user u. It has m=t for task index m and step-index t. We specify the MOMDP framework in the following: State 𝒮: We consider E+1 servers (E edge servers and a cloud server). Hence, the state s_t ∈𝒮 at step t is a fixed length set and contains E+1 server information vectors. We formulate state s_t as s_t={ s_t,e | e ∈ℰ}. The information vector of server e at step t is s_t,e =(L_m,C_u,e,f_e,n_e^ exe(τ_t),E,ℬ_e),   ∀ e ∈ℰ. State s_t,e contains task size L_m, data rate C_u,e, CPU frequency f_e, the number of execution task n_e^ exe(τ_t), the number of edge server E, and task histogram vector ℬ_e, which is the residual size distribution for tasks executed in server e at time instant τ_t. That is, ℬ_e(τ_t)=(b_1,e^ exe(τ_t),b_2,e^ exe(τ_t),...,b_N,e^ exe(τ_t)). Histogram vector ℬ_e has N bins. We denote one of previous tasks as m^' and denote the execution residual size of task m^' at time instant τ_t as L_m^'^ res(τ_t). In Eq. (<ref>), the i-th value b_i,e^ exe(τ_t) in ℬ_e denotes the number of tasks with execution residual size L_m^'^ res(τ_t) within the range of [i-1,i) Mbits. 
Specifically, the last element b_N,e^ exe(τ_t) denotes the number of tasks with execution residual size L_m^'^ res(τ_t) within the range of [N-1,+∞) Mbits. The execution residual size L_m^'^ res(τ_t) is given by [ L_m^'^ res(τ_t) = L_m^'- min( g_m^'((τ_t-m^'Δ t ),L_m^'),; ∀τ_t ∈ [tΔ t,TΔ t],m^'∈{1,2,…,m-1}. ] -2pt Action 𝒜: The action a_t ∈𝒜 denotes that offloading task m to which server. The action space is 𝒜={0,1,2,…,E}. Hence, the action at step t is represented by the following a_t = ∑_e ∈ℰ ex_m,e(t). Transition 𝒯: It describes the transition from s_t to s_t+1 with action a_t, which is denoted by P(s_t + 1|s_t,a_t). Reward ℛ: Unlike a classical MDP setting in which each reward is a scalar, a multi-objective setting requires a vector. Therefore, our reward (profile) function is given by ℛ: 𝒮 × 𝒜→ℝ^2. We denote the reward of energy consumption and delay as r_ E and r_ T. If the agent offloads task m to server e at step t, the reward of energy consumption for state s_t and action a_t is r_ E(s_t,a_t) = -Ê_m, where Ê_m is the estimated energy consumption of task m. Through (<ref>), we can compute the energy consumption of task m. The MORL algorithm maximizes the reward, which is thus the negative of energy consumption. For one episode, the total reward for energy consumption is given by R_ E=∑_t=1^T r_ E(s_t,a_t) = -∑_m ∈ℳÊ_m. The reward for the delay is r_ T(s_t,a_t) = -(T̂_m + ∑_m^'∈ℳ_e(τ_t)ΔT̂_m^'^a_t), where T̂_m is the estimated delay for task m, and ℳ_e(τ_t) is a set of tasks, which are executed in server e at time instant τ_t. The estimated correction of delay ΔT̂_m^'^a_t describes how much delay will increase to task m^' with action a_t. For one episode, the total reward of delay has R_ T=∑_t=1^T r_ T(s_t,a_t) = -∑_m ∈ℳ T_m. To compute reward r_T, we rewrite Eq.(<ref>) as r_ T(s_t,a_t) =- T̂_m - ∑_m^'∈ℳ_e(τ_t) (T̂_m^'^a_t - T̂_m^'^a^*(t)), where T̂_m^'^a_t denotes the estimated residual delay of task m^' with taking action a_t at step t. The residual delay of task m^' without taking action a_t is T̂_m^'^a^*(t), which is the estimated residual delay at the end of step t-1. Next, we introduce the computation of the two cases. -4pt (1) The case without taking action a_t: For task set ℳ_e(τ_t) with n_e^ exe(τ_t) tasks, the execution residual size is a set ℒ_ℳ_e(τ_t)^ res = { L_m^'^ res(τ_t) | m^'∈ℳ_e(τ_t)}. We sort residual task size set ℒ_ℳ_e(τ_t)^ res in the ascending order and get a vector L_ℳ_e(τ_t)^ sort=(L_1,e^ sort(τ_t),L_2,e^ sort(τ_t),...,L_n_e^ exe(τ_t),e^ sort(τ_t)), where L_i,e^ sort(τ_t) is the i-th least residual task size in ℒ_ℳ_e(τ_t)^ res. Specifically, we define L_0,e^ sort(τ_t)=0. Then, we have ∑_m^'∈ℳ_e(τ_t)T̂_m^'^a^*(t) =∑_i=1^n_e^ exe(τ_t) (n_e^ exe(τ_t)-i+1) T̂_i,e^ dur                  =∑_i=1^n_e^ exe(τ_t)η/f_e(n_e^ exe(τ_t) -i+1)^2 (L_i,e^ sort(τ_t) - L_i-1,e^ sort(t)), where T̂_i,e^ dur denotes the estimated during of time from the completing instant of residual task L_i-1,e^ sort(τ_t) to the completing instant of residual task L_i,e^ sort(τ_t). (2) The case with action a_t: The MEC system completes offloading task m at time instant τ_t^' = τ_t + T_m^ off. We consider a high-speed communication system that offloading delay T_m^ off is short than the duration of one step Δ t and satisfies T_m^ off < Δ t. For task set ℳ_e(τ_t^') with n_e^ exe(τ_t^') tasks, the execution residual size is a set ℒ_ℳ_e(τ_t^')^ res = { L_m^ res(τ_t^') | m ∈ℳ_e(τ_t^')}. 
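Before completing the case with action a_t below, the closed form just derived for the case without a_t can be checked numerically: under equal sharing, the sum of remaining completion times equals Σ_i (η/f_e)(n−i+1)²(L_i^sort − L_{i−1}^sort). The sketch below compares this expression with an explicit playout of the processor-sharing schedule; function names and the sample numbers are ours and purely illustrative.

import numpy as np

def sum_residual_delays_closed_form(residuals, f_e, eta):
    """Sum of remaining completion times under equal sharing (closed form above)."""
    L = np.sort(np.asarray(residuals, dtype=float))
    n, prev, total = len(L), 0.0, 0.0
    for i, Li in enumerate(L, start=1):
        total += (eta / f_e) * (n - i + 1) ** 2 * (Li - prev)
        prev = Li
    return total

def sum_residual_delays_simulated(residuals, f_e, eta):
    """Same quantity obtained by explicitly playing out the processor-sharing schedule."""
    L = np.sort(np.asarray(residuals, dtype=float))
    n, t, prev, total = len(L), 0.0, 0.0, 0.0
    for i, Li in enumerate(L, start=1):
        # the i-th smallest task needs (Li - prev) more bits while (n - i + 1) tasks
        # share the CPU, each running at f_e / ((n - i + 1) * eta) bits per second
        t += (Li - prev) * (n - i + 1) * eta / f_e
        total += t
        prev = Li
    return total

res = [5e6, 1e6, 3e6]           # residual sizes in bits (illustrative)
print(sum_residual_delays_closed_form(res, f_e=2e9, eta=100),
      sum_residual_delays_simulated(res, f_e=2e9, eta=100))   # the two agree

The two routines agree exactly, since task i finishes after Σ_{j≤i}(n−j+1)(η/f_e)(L_j^sort − L_{j−1}^sort) seconds, and summing these completion times over i reproduces the squared factor in the closed form.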
We sort set ℒ_ℳ_e(τ_t^')^ res in the ascending order and get a vector L_ℳ_e(τ_t^')^ sort=(L_1,e^ sort(τ_t^'),L_2,e^ sort(τ_t^'),...,L_n_e^ exe(τ_t^'),e^ sort(τ_t^')), where L_i,e^ sort(τ_t^') is the i-th least residual task size in ℒ_ℳ_e(τ_t^')^ res. Then, it satisfies T̂_m + ∑_m^'∈ℳ_e(τ_t^')T̂_m^'^a_t                                                = ∑_i=1^n_e^ exe(τ_t)(n_e^ exe-i+1) min( T̂_i,e^ dur, max( T̂_m^ off -∑_j=1^i-1T̂_j,e^ dur, 0 ))     + ∑_i=1^n_e^ exe(τ_t^')η/f_e(n_e^ exe(τ_t^')-i+1)^2(L_i,e^ sort(τ_t^') - L_i-1,e^ sort(τ_t^')) + T̂_m^ off, where T̂_m^ off is the estimated offloading delay for task m with Eq.(<ref>). In Eq.(<ref>), the first term to the right of the equation estimates the sum of delay for tasks ℳ_e(τ_t) from time instant τ_t to τ_t^'. The second term to the right of Eq.(<ref>) estimates the sum of delay for tasks ℳ_e(τ_t^') from time instant τ_t^' to infinity. The expression η/f_e(L_i,e^ sort(τ_t^') - L_i-1,e^ sort(τ_t^') in Eq. (<ref>) represents the required time from completing residual size L_i-1,e^ sort(τ_t^') to completing residual size L_i,e^ sort(τ_t^'). To simplify the calculation of L_1,e^ sort(τ_t^')-L_0,e^ sort(τ_t^'), we define L_0,e^ sort(τ_t^')=0 specifically. To summarize, if the agent offloads task m to server e at step t, the reward of delay is r_ T(s_t,a_t) = -T̂_m^ off + ∑_i=1^n_e^ exe(τ_t) (n_e^ exe(τ_t)-i+1) T̂_i,e^ dur                     - ∑_i=1^n_e^ exe(τ_t) (n_e^ exe-i+1) min( T̂_i,e^ dur, max( T̂_m^ off-∑_j=1^i-1T̂_j,e^ dur, 0))        - ∑_i=1^n_e^ exe(τ_t^')η/f_e(n_e^ exe(τ_t^')-i+1)^2(L_i,e^ sort(τ_t^') - L_i-1,e^ sort(τ_t^')). To achieve the MORL algorithm, we compute a scalarized reward given preference ω: r_ω(s_t,a_t) =ω^T× (α_ Tr_ T(s_t,a_t),α_ Er_ E(s_t,a_t)), where α_ T and α_ E are coefficients for adjusting delay r_ T(t) and energy consumption r_ E(t) to the same order of magnitude. The total reward of one episode is R_ω=∑_t=1^T r_ω(s_t,a_t). §.§ MORL Scheduling We train DRL-based scheduling policies based on a PPO algorithm <cit.>, which is a family of policy gradient (PG) methods. The PPO algorithm can sample the data from the transition several times instead of one time within each episode. It improves the sampling efficiency than traditional PG methods. The neural networks with parameters θ contain an actor network and a critic network. In the training phase, the MORL algorithm trains a parametric network for each preference. In the evaluation phase, the parametric network evaluates the Pareto front of energy consumption and delay for multi-edge servers in the MEC environment. We use generalized advantage estimator (GAE) technology to reduce the variance of policy gradient estimates<cit.>. The GAE advantage function for objective i ∈{ T, E } is [ Â_i(t) = ∑_t^' = t^T - 1γλ( α _ir_i(s_t',a_t') + γV_i,θ(s_t' + 1) -V_i,θ(s_t')) , ] where λ is a GAE discount factor within [0,1], and V_i,θ(s(t)) denotes the value of state s(t). Value function V_i,θ(·) is estimated by a critic network. In the PPO algorithm, the gradient direction of objective i ∈{ T, E} is given as [ ∇_θ L_i^clip(θ)=𝔼_t [min( r^ pr_t(θ), clip(r^ pr_t(θ), 1-ϵ, 1+ϵ) ); Â_i(t) ∇logπ_θ(a_t|s_t) ], ] where ϵ is a clip hyperparameter. The probability ratio is r_t^ pr(θ)=π_θ(a_t|s_t)/π_θ_old(a_t|s_t). The surrogate objective is r_t^ pr(θ)Â_t, which corresponds to a conservative policy iteration. The objective is constrained by clip(r_t^ pr(θ)Â_t, 1-ϵ, 1+ϵ), to penalize the policy move outside interval [1-ϵ, 1+ϵ]. 
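For reference, the GAE recursion and the clipped surrogate can be sketched in a few lines of NumPy. This is the standard PPO/GAE computation, with the usual (γλ)^l weighting of the temporal-difference errors, rather than the authors' code; the per-objective advantages are then combined with the preference weights exactly as in the scalarised gradient discussed next.

import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimator A_t = sum_l (gamma*lam)^l * delta_{t+l},
    with delta_t = r_t + gamma*V(s_{t+1}) - V(s_t); `values` has length T + 1."""
    T = len(rewards)
    adv, last = np.zeros(T), 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

def ppo_clip_objective(ratio, adv, eps=0.2):
    """PPO clipped surrogate: mean of min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)."""
    ratio, adv = np.asarray(ratio), np.asarray(adv)
    return np.mean(np.minimum(ratio * adv,
                              np.clip(ratio, 1 - eps, 1 + eps) * adv))

# scalarised advantage for preference w = (w_T, w_E):
# A_w(t) = w_T * alpha_T * A_T(t) + w_E * alpha_E * A_E(t)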
Given the gradient directions of the two objectives, a policy can reach the Pareto front by following a direction in ascent simplex <cit.>. An ascent simplex is defined by the convex combination of single–objective gradients. As shown in Fig. <ref>, the green arrow and blue arrow denote the gradient directions of the delay objective and energy consumption objective, respectively. The light blue area stands for an ascent simplex. For reward function r_ω(·), the gradient direction of preference ω is [ ∇_θ L_ω^clip(θ)=𝔼_t [min( r_t^ pr(θ), clip(r_t^ pr(θ), 1-ϵ, 1+ϵ) ); ω^T(Â_1(t),Â_2(t)) ∇logπ_θ(a_t|s_t) ]; =ω^T(∇_θ L_1^clip(θ), ∇_θ L_2^clip(θ)). ] The vector ∇_θ L_ω^clip(θ) is a gradient direction in ascent simplex. It makes a policy to the Pareto front by optimizing neural network parameters θ. As an example shown in Fig. <ref>, a neural network contains convolution layers and multi-layer perceptron (MLP) layers. The convolution layers encode the input state with point-wise convolution kernel and turn information vector s_t of each server to feature vector F. We reshape all feature vectors and concatenate them to get the total feature vector. The MLP layers encode the total feature vector to get the output. For an actor-network, the output is probability π_θ(a_t|s_t) of each action. For a critic network, the output is estimated value ω^ T[V_ T,θ(s_t),V_ E,θ(s_t)] for preference ω. Additionally, we apply deep residual learning technology <cit.> to build the neural network architecture to address the problem of vanishing/exploding gradients. We present the proposed MORL algorithm in Algorithm 1. For each preference ω in set Ω, we train a policy with PPO method to maximize reward R_ω and approximate Pareto front PF(Π). To improve the training efficiency achieved by <cit.>, we reuse trained neural network parameters θ_ω_i (i∈{1,2,…,n-1}) to initialize the next parameters θ_ω_i+1, with a similar preference. § SIMULATION RESULTS In this section, we evaluate the performances of the MORL scheduling scheme and compare it with benchmarks. We introduce the simulation setup and evaluation metrics. Then, we analyze the Pareto fronts and compare them with the benchmarks. §.§ Simulation Setup We set the preference set as Ω with an equal interval 0.02 and obtain 50 preferences to fit the Pareto front. Each preference's performance contains total delay and energy consumption for all tasks in one episode. We evaluate a performance (delay or energy consumption) with an average of 1000 episodes. Furthermore, we analyze the Pareto front of the proposed scheme and compare it with benchmarks. A disk coverage has a radius of 1000m to 2000m for a cloud server and 50m to 500m for an edge server. Each episode needs to initial different radiuses for the cloud and edge servers. We set the mean of task size L̅ according to Eq. (<ref>). §.§ Evaluation Metrics We consider the following metrics to evaluate the performances of the proposed algorithms. * Energy Consumption: The total energy consumption of one episode given as ∑_m=1^M E_m^ off + E_m^ exe, and average energy consumption per Mbits task of one episode given by ∑_m=1^ME_m^ off + E_m^ exe/ML̅. * Task Delay: The total task delay given as ∑_m=1^M T_m^ off + T_m^ exe and average delay per Mbits task of one episode given by ∑_m=1^MT_m^ off + T_m^ exe/ML̅. * Pareto Front: PF(Π)={π∈Π | ∄π^'∈Π:y^π^'≻_Py^π}, where the symbols are defined by Eq. (<ref>). * Hypervolume metric: 𝒱(PF(Π))=∫_ℝ^2𝕀_V_h(PF(Π))(z)dz, where the symbols are defined by Eq. (<ref>). 
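The Pareto-front and hypervolume metrics listed above can be computed with a short two-objective sketch. In the paper the performances are total delay and energy, which are to be minimised, so in practice one applies the dominance relation defined earlier to the negated totals (or flips the inequalities); the helper names below are ours.

import numpy as np

def dominates(ya, yb):
    """Pareto dominance for two objectives, written here as 'larger is better'."""
    ya, yb = np.asarray(ya), np.asarray(yb)
    return bool(np.all(ya >= yb) and np.any(ya > yb))

def pareto_front(Y):
    """Indices of the non-dominated points in a list of 2-D performances."""
    return [i for i, yi in enumerate(Y)
            if not any(dominates(yj, yi) for j, yj in enumerate(Y) if j != i)]

def hypervolume_2d(Y, y_ref):
    """Area dominated by the front w.r.t. reference point y_ref (two maximised objectives)."""
    front = sorted([Y[i] for i in pareto_front(Y)], key=lambda p: p[0])
    hv, x_prev = 0.0, y_ref[0]
    for x, y in front:
        hv += (x - x_prev) * (y - y_ref[1])
        x_prev = x
    return hv

# toy usage: four (negated) delay/energy pairs and a reference point below all of them
Y = [(-10.0, -3.0), (-6.0, -5.0), (-4.0, -9.0), (-7.0, -7.0)]
print(pareto_front(Y))                         # -> [0, 1, 2]; the last point is dominated
print(hypervolume_2d(Y, y_ref=(-12.0, -10.0)))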
§.§ Simulation results §.§.§ Pareto Front Analysis Fig. <ref> presents the Pareto front of the proposed MORL scheme. In this scenario, the number of edge servers is E=8, and the mean of task size L̅=20 Mbits. The Pareto front shows that minimizing the delay (the leftmost point) increases energy by 67.3%, but minimizing energy (the rightmost point) increases the delay by 77.6%. Fig. <ref> shows the points of the Pareto front with trained and untrained preferences. Each untrained preference lies intermediate to the adjacent trained preferences. The result shows that by reusing trained parameters to the most similar preference, our MORL scheme has generalization for new preferences. §.§.§ Performance Comparison with Benchmarks We evaluate the performance of the proposed MORL algorithms and compare it with a linear upper confidence bound (LinUCB)-based scheme <cit.>, a heuristics-based scheme, and a random-based scheme. LinUCB algorithms belong to contextual multi-arm bandit (MAB) algorithms, widely used in task offloading problems <cit.>. Some work <cit.> apply heuristic methods to schedule for offloading. * LinUCB-based scheme: Offloading scheme based on a kind of contextual MAB algorithm. This scheme uses states as MAB contexts and learns a policy by exploring different actions. * Heuristics-based scheme: Heuristic methods greedily select the server with the optimal weighted sum of estimated running speed and energy consumption for the current step. * Random-based scheme: The agent offloads a task to a cloud server or a random edge server according to probability. We adjust the probability to compute a Pareto front. Fig. <ref> illustrates the Pareto front comparison of the proposed MORL scheme with other schemes. In this scenario, the system has E = 8 and L̅=20 Mbits. We select the position which denotes the maximum delay and energy consumption of the performance profiles in Fig. <ref> as the reference point to compute the hypervolumes. The hypervolume of the proposed MORL scheme is 80.7, the LinUCB-based scheme is 69.9, the heuristics-based scheme is 63.9, and a random-based scheme is 24.2. Compared with a LinUCB-based scheme and a random-based scheme, the proposed MORL scheme increases the hypervolume of the Pareto front by 80.7-69.9/69.9=15.5% and 80.7-24.2/24.2=233.1%. As shown, the proposed MORL scheme significantly outperforms other schemes. The MORL scheme has dynamic adaptability to learn the dynamics of task arrival and server load, which enables it to achieve better scheduling. §.§.§ Pareto Front Analysis in Multi-edge Scenarios We evaluate the Pareto front of the proposed MORL algorithm in scenarios with different edge server quantities. Fig. <ref> illustrates the Pareto fronts of the proposed MORL algorithm in the case of edge quantity E ∈{4,6,8,10}. The mean of task size, represented by L̅, is determined by Eq. (<ref>) to balance the supply and demand of computational capability. The result shows that, in the balance case, the Pareto front of fewer edge servers and less demand case can dominate the more one. It means that while more edge servers may increase computational capability, matching them with more task demands may result in increased total energy consumption and task delay. The performances are computed per 1 Mbits task in Fig. <ref> for a fair comparison. As the number of edge servers increases, the Pareto front of a more edge servers case can dominate the less one. 
The result shows that though more edge servers match more task demands, deploying more edge servers can significantly improve delay and energy consumption per Mbits tasks for each preference. § CONCLUSION In this work, we investigated the offloading problem in MEC systems and proposed a MORL-based algorithm that can achieve Pareto fronts. A key advantage of the proposed MORL method is that it employs a MORL framework to offload tasks adopting various preferences, even untrained preferences. We present a novel MOMDP framework for the multi-objective offloading problem in MEC systems. Our framework includes two key components: (1) a well-designed encoding method to construct features of multi-edge MEC systems. (2) a sophisticated reward function to evaluate the immediate utility of delay and energy consumption. Simulation results demonstrate the effectiveness of our proposed MORL scheme, which achieves Pareto fronts in various scenarios and outperforms benchmarks by up to 233.1%. § ACKNOWLEDGMENTS The research leading to these results received funding from “Research on Combinatorial Optimization Problem Based on Reinforcement Learning” supported by Beijing Municipal Natural Science Foundation under Grant Agreement Grant No. 4224092. This work was supported in part by the National Natural Science Foundation of China under Grants 62202427 and Grants 62202214. In addition, it received funding from National Key R&D Program of China (2022ZD0116402). IEEEtran
http://arxiv.org/abs/2307.01419v1
20230704010654
Fermion Hierarchies in $SU(5)$ Grand Unification from $Γ_6^\prime$ Modular Flavor Symmetry
[ "Yoshihiko Abe", "Tetsutaro Higaki", "Junichiro Kawamura", "Tatsuo Kobayashi" ]
hep-ph
[ "hep-ph", "hep-th" ]
CTPU-PTC-23-27 EPHOU-23-012 Fermion Hierarchies in SU(5) Grand Unification from Γ_6^' Modular Flavor Symmetry 2cm Yoshihiko Abe^ a [[email protected]], Tetsutaro Higaki^ b [[email protected]], Junichiro Kawamura^ c [[email protected]] and Tatsuo Kobayashi^ d [[email protected]] 0.5cm ^a Department of Physics, University of Wisconsin, Madison, WI 53706, USA ^b Department of Physics, Keio University, Yokohama, 223-8522, Japan ^c Center for Theoretical Physics of the Universe, Institute for Basic Science (IBS), Daejeon 34051, Korea ^d Department of Physics, Hokkaido University, Sapporo 060-0810, Japan 1.5cm We construct a model in which the hierarchies of the quark and lepton masses and mixing are explained by the Γ_6^' modular flavor symmetry. The hierarchies are realized by the Froggatt-Nielsen-like mechanism due to the residual Z^T_6 symmetry, approximately unbroken at τ∼ i∞. We argue that the Γ_6^(') symmetry is the minimal possibility to realize the up-type quark mass hierarchies, since the Yukawa matrix is symmetric. We find a combination of the representations and modular weights and then show numerical values of 𝒪(1) coefficients for the realistic fermion hierarchies. Re Im § INTRODUCTION The Grand Unified Theory (GUT) is an attractive framework to understand the gauge structure of the Standard Model (SM) <cit.>. In the SU(5) GUT, the quarks and leptons in a generation are unified into 5 and 10 representations. As a result, for instance, the exactly opposite charges of the electron and the proton are manifestly explained. The SU(5) GUT predicts the unification of the three gauge coupling constants which is consistent with the Minimal Supersymmetric SM (MSSM) <cit.>. Due to the unification of the quarks and leptons, the Yukawa couplings are also constrained to be consistent with the SU(5) gauge symmetry, as we shall study in this paper. The modular flavor symmetry provides an interesting possibility to understand the flavor structures of the three generations of quarks and leptons in the SM <cit.>. Under the modular symmetry, Yukawa coupling constants are treated as the so-called modular forms, holomorphic functions of modulus τ. The finite modular groups Γ_N, N∈, are generalization of the non-Abelian discrete flavor symmetries <cit.>, as well studied in the literature <cit.>. In fact, we can find isomorphisms Γ_2 ≃ S_3, Γ_3 ≃ A_4, Γ_4 ≃ S_4, and so on. There have been many attempts to understand the flavor structures of the SM by the finite modular flavor symmetries <cit.>. There are the hierarchies in the masses of the SM fermions and in the Cabibbo-Kobayashi-Maskawa (CKM) matrix. The residual _N symmetry of the modular symmetry induces hierarchies of Yukawa couplings if the modulus τ has a value near a fixed point. For instance, if τ∼ i∞, the _N symmetry remains in the modular Γ_N symmetry, and then the Yukawa couplings are suppressed by powers of q_N := e^ -2πImτ/N≪ 1 depending on the charge under the _N symmetry. This is a realization of the Froggatt-Nielson (FN) mechanism, where the powers of the flavon field are replaced by those of q_N. Recently, this idea is applied for the quark sector based on the modular A_4 <cit.>, A_4× A_4 × A_4 <cit.>, S_4 <cit.> and Γ_6 <cit.> symmetries, and the lepton sector is also studied in Refs. <cit.>. In Ref. <cit.>, the present authors constructed a model for both quark and lepton sectors based on Γ_4^'≃ S_4^' symmetry, where Γ_N^' is the double covering of Γ_N symmetry. 
In this work, we construct a model to explain the hierarchies of the quarks and leptons in the SU(5) GUT. It will be turned out that the residual symmetry should be _N with N≥ 5, to realize the hierarchy of the up-type quark masses because the up Yukawa matrix is a symmetric matrix. Since the Γ_5^(') symmetry does not have proper representation to realize the hierarchy <cit.> as explained later, we shall consider the Γ_6^' symmetry as the minimal possibility. We consider Γ_6^', double covering of Γ_6, to construct a model with non-singlet representations. We also study the neutrino sector assuming the type-I seesaw mechanism by introducing three SU(5) singlets. This paper is organized as follows. In Sec. <ref>, we briefly review the flavor structures of the SU(5) GUT and the modular flavor symmetry, and then discuss how to explain the hierarchies of the masses and mixing. The explicit model is studied and our results of numerical calculations for the charged fermions are shown in Sec. <ref>, and then the neutrino sector is discussed in Sec. <ref>. Section <ref> concludes. The details of the modular Γ_6^' symmetry are shown in App. <ref>. § FERMION HIERARCHIES IN SU(5) GUT §.§ SU(5) GUT The superpotential at the dimension-4 is given by W_4 = 1/2 y_ij_ABCDE 10^AB_i 10^CD_j H^E + h_ij 10^AB_i5_jAH_B, where A,B,⋯ = 1,2,3,4,5 are the SU(5) indices, and i,j = 1,2,3 are the flavor indices. The MSSM fields are in the SU(5) multiplets as 5_a = d_a, 5_3+α = _αβℓ^β 10^a,3+α = Q^aα, 10^ab = ^abcu_c, 10^3+α,3+β = ^αβe, where a,b,c = 1,2,3 and α,β = 1,2 are the SU(3)_C and SU(2)_L indices, respectively. In these expressions, 's with the indices are the complete anti-symmetric tensors. The Higgs doublets are included in the (anti-)fundamental representations H (H) as H^3+α = ^αβ H_uβ , H_3+α = H_dα. We assume that the triplets are so heavy that it will not induce too fast proton decay by a certain mechanism <cit.> [ As far as the triplet Higgses are at the GUT scale, the proton lifetime would be long enough if soft SUSY breaking parameters are larger than 100 TeV <cit.>. ]. Since the Yukawa matrices of the down quarks and charged leptons are given by the common Yukawa matrix h_ij, the dimension-4 Yukawa couplings in Eq. (<ref>) can not explain the realistic fermion masses. In this work, we assume that the SU(5) gauge symmetry is broken by an adjoint field Σ_A^B whose VEV is given by Σ = v_Σ×diag(2,2,2,-3,-3), so that the GUT symmetry is broken down to the SM gauge symmetry. We shall consider the dimension-5 interaction involving the adjoint field, W_5 ∋ 1/Λ C_ij 10^AB_i Σ_A^C 5_jCH_B, which splits the Yukawa couplings of the down quarks and the charged leptons. Here Λ is a cutoff scale. The dimension-5 operators inserting the adjoint field to other places will not change the flavor structure, and thus we do not write them explicitly. The Yukawa couplings for the MSSM fields are given by W = u Y_u Q H_u + d Y_d H_d + e Y_e ℓ H_d, where the Yukawa matrices are given by [Y_u]_ij = y_ij, [Y_d]_ij = h_ij + 2 R_Σ C_ij, [Y_e]_ij = h_ij - 3 R_Σ C_ij, where R_Σ := v_Σ/Λ. Thus the R_Σ splitting of the down-type Yukawa couplings is induced via the VEV of the adjoint scalar field v_Σ. §.§ Modular flavor symmetry We consider the so-called principal congruence subgroups Γ(N), N∈, defined as Γ(N):= {[ a b; c d ]∈ SL(2,), [ a b; c d ]≡[ 1 0; 0 1 ] mod N }, where Γ := SL(2,) = Γ(1) is the special linear group of 2× 2 matrices of integers. The modulus τ is transformed by the group Γ as τ→aτ+b/cτ+d. 
The finite modular group Γ_N^' is defined as a quotient group Γ_N^' := Γ/Γ(N), which can be generated by the generators, S = [ 0 1; -1 0 ], T = [ 1 1; 0 1 ], R = [ -1 0; 0 -1 ]. One can consider the quotient group Γ_N := Γ/Γ(N), where Γ:= Γ/^R_2 with ^R_2 being the _2 symmetry generated by R. Γ^'_N is the double covering of the group Γ_N. For N < 6, the generators of Γ_N^' satisfy S^2 = R, (ST)^3 = R^2 = T^N = 1, TR = RT, and those for Γ_N are given by taking R=1. At N=6, the generators should also satisfy ST^2 ST^3 ST^4 ST^3 = 1, in addition to those in Eq. (<ref>). Under Γ_N^', a modular form, a holomorphic function of τ, kr with representation r and modular weight k transforms as kr→ (cτ+d)^k ρ(r) kr(τ), where ρ(r) is the representation matrix of r. We assume that the chiral superfield Φ with representation r_Φ and weight -k_Φ transform as Φ→ (cτ+d)^-k_Φρ(r_Φ) Φ. If the modulus is stabilized near a fixed point τ∼ i, w := e^2π i/3 or i∞, there remains a residual symmetry ^S_4, ^ST_3 or ^T_N, respectively. Here, the superscript represents a generator associated with the residual symmetry. The modular invariant Kähler potential for the kinetic term is given by K ∋Φ^†Φ/(-iτ +iτ^*)^k_Φ, and hence the canonically normalized chiral superfield is given by Φ̂ = (√(2Imτ))^-k_ΦΦ. The normalization factors from √(2 Imτ) will be important for τ∼ i∞, as we shall consider in this work. §.§ Assignments of representations The hierarchical Yukawa matrices can be realized by the FN mechanism with a ℤ_N symmetry <cit.> if the modulus field is stabilized near a fixed point. We shall consider the FN-like mechanism by ℤ^T_N symmetry, and define := √(3) e^2π i τ/6 in Eq. (<ref>) throughout this paper. See Appendix <ref> for more details. Since the up-type Yukawa matrix is symmetric in the SU(5) GUT, the texture of Y_u is, in general, given by Y_u ∼[ ^2n ^n+m ^n; ^n+m ^2m ^m; ^n ^m 1; ], where ≪ 1 and n, m∈ℕ. The singular values are read as (^2n, ^2m, 1), and hence the minimal possibility to obtain the hierarchal up quark masses is (m,n) = (1,2), so that the singular values are given by (^4, ^2, 1). Thus N>4 is required to realize the hierarchy by a ℤ_N symmetry. The Γ_6^(') symmetry is the minimal possibility for the up quark mass hierarchy. N>4 can be realized only from the residual T symmetry in the modular Γ_N^(') symmetry with N>4, where ℤ^T_N symmetry remains unbroken at τ∼ i∞. In the case of N=5, however, there are no such representations whose T-charges are (0,-1,-2) ≡ (0,4,3) mod N=5 <cit.>. Therefore, N=6 is the minimal possibility to realize the hierarchal up Yukawa couplings. As shown in detail in Appendix <ref>, the irreducible representations of Γ_6 are <cit.> 1^s_b, 2_b, 3^s, 6, where s=0,1 and b=0,1,2. Since Γ_6 is isomorphic to Γ_2×Γ_3 ≃ S_3× A_4, upper (lower) indices can be considered as S_3 (A_4) indices. For the double covering Γ_6^', there are additional representations, 2^s_b, 4_b. The charges under the residual ^T_6 symmetry of the representations less than three dimensions are given by T(1^s_b) = 3s+2b, T(2_b)= [ 2b; 2b+3 ], T(3^s)= [ 3s; 3s+2; 3s+4 ], T(2^s_b) = [ 3s + 2b; 3s + 2b +2 ], modulo 6. At N=6, the texture can be realized for, 10_i=1,2,3 = 1^0_1 ⊕ 2^1_2 =: 10_1⊕ (10_2, 10_3). The ℤ^T_6-charges are (2, 1, 3). Since the doublet representation 2^1_2 exists only for Γ_6^, we should consider the double covering of Γ_6. 
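A quick numerical check of the schematic texture quoted above: multiplying each entry ε^{q_i+q_j} by generic order-one coefficients (random numbers here, purely for illustration) and taking the singular values reproduces the claimed hierarchy (ε^4, ε^2, 1) for (m, n) = (1, 2), i.e. effective suppression exponents (2, 1, 0). This is a sketch of the Froggatt-Nielsen-like suppression only, not a computation with the actual modular forms of the model.

import numpy as np

eps = 0.05                                  # ~ (m_u/m_t)^(1/4), as quoted in the text
q = np.array([2, 1, 0])                     # effective exponents (n, m, 0) = (2, 1, 0)
rng = np.random.default_rng(0)

c = rng.uniform(0.5, 1.5, size=(3, 3))      # generic O(1) coefficients
c = (c + c.T) / 2                           # Y_u is symmetric in SU(5)
Y_u = c * eps ** (q[:, None] + q[None, :])  # texture eps^{q_i + q_j}

sv = np.linalg.svd(Y_u, compute_uv=False)
print(sv)                                   # roughly (1, eps^2, eps^4), up to O(1) factors
print(1.0, eps ** 2, eps ** 4)

The exact values of the two light singular values depend on the order-one coefficients, but for generic choices the double suppression of the lightest eigenvalue survives, which is the point of requiring a residual Z_N with N > 4.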
We can replace 2^1_2 to 1^1_2⊕ 1^1_0 which exist in Γ_6, but we do not consider this case because the model is less predictive and may need larger modular weights to have certain modular forms <cit.>. Since the doublet lepton ℓ_i is in 5_i, the hierarchal structures of the neutrino masses and the PMNS matrix are determined by the representation of 5_i. As ∼ (m_u/m_t)^1/4∼0.05 for the charged fermion hierarchies, ℓ_i should have a common ℤ^T_6 charge, so that there is no hierarchy in the neutrino sector. Thus we assign 5_i=1,2,3 = 1^s_b ⊕ 1^s_b ⊕ 1^s_b =: 5_1⊕5_2 ⊕5_3 , with s=0,1 and b=0,1,2. The values of (s,b) will be determined to have the modular forms for a given modular weights. In this case, the singular values of the down quarks and charged leptons are given by ^q (^2, , 1), where q = 0,1,2,3 depends on (s,b). The textures of the CKM and PMNS matrices are given by V_CKM∼[ 1 ^2; 1 ; ^2 1 ], V_PMNS∼[ 1 1 1; 1 1 1; 1 1 1; ], and there is no hierarchy in the neutrino masses originated from . We discuss the neutrino sector explicitly in Sec. <ref>. We also note that the textures will also be modified by powers of √(2Imτ) as in Eq. (<ref>) depending on modular weights, as will be discussed in the next section in which we will take √(2Imτ)∼ 2.5. § MODEL §.§ Yukawa couplings We denote the modular weights of 10_1,2 and 5_1,2,3 as k_10 = (k_10^1, k_10^2), k_5 = (k_5^1,k_5^2,k_5^3). The modular invariant superpotential of the Yukawa couplings are given by W = α_1 ( 2k_10^11_1^0 10_1 10_1 ) H + α_2 ( k_10^1+k_10^22^1_2 10_1 10_2 )_1 H + α_3 ( 2k_10^23^010_2 10_2 )_1 H + ∑_i=1,2,3[ (β_i1+Σ/Λγ_i1) ( k_10^1+k_5^i1^s_2-b5_i 10_1 ) + (β_i2+Σ/Λγ_i2) ( k_10^2+k_5^i2^s+1_-b5_i 10_2 )_1 ] H, where the contractions of the SU(5) indices are implicit. It is assumed that Σ, H and H are trivial singlets with modular weight k=0. The symbol (⋯)_1 indicates that the trivial singlet combinations inside the parenthesis. Here α_i, β_1i and β_2i are 1 coefficients. The Yukawa matrices for the MSSM quarks and leptons are read as Y_u = 1/√(6)[ √(6)α_1 2k_10^11^0_1 √(3)α_2[k_10^1+k_10^22^1_2]_2 - √(3)α_2[k_10^1+k_10^22^1_2]_1; √(3)α_2[k^2_10+k^1_102^1_2]_2 -√(2)α_3 [Y_3^0^(2k_10^2)]_3 α_3[2k^2_103^0]_2; -√(3)α_2[k^2_10+k^1_102^1_2]_1 α_3[2k^2_103^0]_2 √(2)α_3 [2k^2_103^0]_1; ], Y_d = 1/√(2)[ √(2)β_11^d k_5^1+k_10^11^s+1_2-b β_12^d [k_5^1+k_10^22^s_-b]_2 -β_12^d [k_5^1+k_10^22^s_-b]_1; √(2)β_21^d k_5^2+k_10^11^s_2-b β_22^d [k_5^2+k_10^22^s+1_-b]_2 -β_22^d [k_5^2+k_10^22^s+1_-b]_1; √(2)β_31^d k_5^3+k_10^11^s_2-b β_32^d [k_5^3+k_10^22^s+1_-b]_2 -β_32^d [k_5^3+k_10^22^s+1_-b]_1 ; ], Y_e = 1/√(2)[ √(2)β_11^e k_5^1+k_10^11^s_2-b √(2)β_21^e k_5^2+k_10^11^s_2-b √(2)β_31^e k_5^3+k_10^11^s_2-b; β_12^e [k_5^1+k_10^22^s+1_-b]_2 β_22^e [k_5^2+k_10^22^s+1_-b]_2 β_32^e [k_5^3+k_10^22^s+1_-b]_2; -β_12^e [k_5^1+k_10^22^s+1_-b]_1 -β_22^e [k_5^2+k_10^22^s+1_-b]_1 -β_32^e [k_5^3+k_10^22^s+1_-b]_1; ], where β^d_ij := β_ij + 2R_Σγ_ij, β^e_ij := β_ij - 3R_Σγ_ij. Here, [kr]_i denotes the i-th component of the modular form kr. As shown in Eq. (<ref>), Y_d = Y_e^T up to the splitting at R_Σ because d∈5 and Q ∈ 10 while e∈ 10 and ℓ∈5. Note that the number of coefficients may change depending on the number of modular forms for a given modular weight. 
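Before fixing the modular weights, it is instructive to check numerically that left-handed rotations of textures of this type reproduce the CKM pattern quoted above. The sketch below uses random O(1) coefficients in place of the modular-form components (it is not the benchmark fit, and the canonical-normalization factors introduced in the next subsection are omitted):

import numpy as np

rng = np.random.default_rng(2)
eps = 0.05
L = np.diag([eps**2, eps, 1.0])              # Z_6^T suppression of the three 10-plets
C_u = rng.uniform(0.5, 1.5, size=(3, 3))
C_d = rng.uniform(0.5, 1.5, size=(3, 3))

# convention: the left-handed doublet index labels the rows
Y_u = L @ C_u @ L                            # both chiralities of the up sector sit in 10_i
Y_d = L @ C_d                                # universal 5-bar charges: no right-handed hierarchy

def left_rotation(Y):
    # unitary matrix diagonalizing Y Y^dagger, columns ordered light -> heavy
    _, U = np.linalg.eigh(Y @ Y.conj().T)
    return U

V_ckm = left_rotation(Y_u).conj().T @ left_rotation(Y_d)
print(np.round(np.abs(V_ckm), 4))            # magnitudes follow [[1, eps, eps^2], [eps, 1, eps], [eps^2, eps, 1]]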
§.§ Assigning modular weights The minimal choice to obtain the realistic Yukawa matrices is k_10 = (2,3), k_5= (0,0,2), and we take (s,b) = (1,0) for 5_i = 1^s_b, so that the Yukawa matrices are given by Y_u = 1/√(6)[ √(6)α_1 Y^(4)_1^0_1 √(3)α_2[52^1_2]_2 - √(3)α_2[52^1_2]_1; √(3)α_2[52^1_2]_2 -√(2)α_3^i [63^0,i]_3 α_3^i [63^0,i]_2; -√(3)α_2[52^1_2]_1 α_3^i[63^0,i]_2 √(2)α_3^i [63^0,i]_1; ], Y_d = 1/√(2)[ √(2)β_11^d 21^1_2 β_12^d [32^0_0]_2 -β_12^d [32^0_0]_1; √(2)β_21^d 21^1_2 β_22^d [32^0_0]_2 -β_22^d [32^0_0]_1; 0 β_32^d [52^0_0]_2 -β_32^d [52^0_0]_1 ; ], Y_e = 1/√(2)[ √(2)β_11^e 21^1_2 √(2)β_21^e 21^1_2 0; β_12^e [32^0_0]_2 β_22^e [32^0_0]_2 β_32^e [52^0_0]_2; -β_12^e [32^0_0]_1 -β_22^e [32^0_0]_1 -β_32^e [52^0_0]_1; ]. There are two modular forms 63^0,i, i=1,2, and there is no 1^1_2 at k=4. The Yukawa matrices are rescaled by the canonical normalization of the kinetic terms as in Eq. (<ref>). Since the top Yukawa coupling is 1, we fix the overall factor of the Yukawa couplings, so that the factor (√(2Imτ))^6 is compensated. Note that the normalization of the modular form is not fixed from the bottom-up approach [ We also assume that this normalization factor is universal for all of the matter fields 5_i and 10_i, and hence do not change the flavor structure. ]. We absorb this effect into the coefficients, rather than the modular forms, by defining α̂_i := (2Imτ)^3α_i, β̂_ij := (2Imτ)^3β_ij, γ̂_ij := (2Imτ)^3γ_ij. With the assignments of the modular weights, the texture of the Yukawa matrices are given by Y_u ∼[ η^2 η^1/2^3 η^1/2; η^1/2^3 ^4 ^2; η^1/2 ^2 1 ], Y_d ∼ Y_e^T ∼[ η^2 η^3/2^2 η^3/2; η^2 η^3/2^2 η^3/2; 0 η^1/2^2 η^1/2 ], where η := 1/(2Imτ) is the factor from the canonical normalization. The hierarchical structures of the quark masses and the CKM matrix is given by (m_u, m_c, m_d, m_s, m_b)/m_t ∼ (^4, η^2, η^3/2^2, η^2, η^1/2) ∼ ( 2× 10^-5, 7× 10^-4, 3×10^-4, 2×10^-3, 0.4 ) , (s^Q_12, s^Q_23, s^Q_13) ∼ (/η^1/2, η^1/2, ^2) ∼(0.2, 0.03, 0.005), where = 0.067 and η = 0.16 are used for the numerical estimations. Here, s_ij^Q is the mixing angles in the standard parametrization of the CKM matrix. The mass hierarchies of the charged leptons are the same as those for the down-type quarks. These values well fit to the data shown in Table <ref> at the benchmark points discussed in the next section, except for y_u and y_e which are about 10 larger than the experimental values. These differences will be explained by the numerical factors in the modular forms and the 1 coefficients. It is interesting that the CKM angles in our models fit to the data after taking account the powers of η from the canonical normalization, so that s_23^Q/s_12^Q ∼η∼ 0.2. §.§ Benchmark points We find the numerical values of the parameters by numerical optimization. We restrict the parameter space to be tanβ∈ (5, 60), η_b ∈ (-0.6, 0.6), γ̂_ij∈ (-1,1). Here, tanβ includes the threshold correction to the tau lepton, and η_b is that for the bottom quark as defined in Ref. <cit.>. In this work, we treat η_b as a parameter which will be determined from the soft SUSY breaking parameters, see for threshold corrections Refs. <cit.>. The threshold corrections to the light flavors, η_q and η_ℓ, are assumed to be zero for simplicity. We take R_Σ = 0.1 and restrict γ̂_ij < 1, so that the contribution from the dimension-5 operator is sub-dominant. We found the following two benchmark points. 
At the first benchmark point (BP1), the inputs are given by tanβ = 11.4643, η_b = 0.187818 τ = 0.0592+3.1033i, [ α̂_1; α̂_2; α̂^1_3; α̂^2_3 ] = [ 2.1054; -1.9005; -1.5069; 1.7427 ] , [ β̂_11; β̂_12; β̂_21; β̂_22; β̂_32 ] = [ 2.2077e^0.1766i; 1.8263; -0.6111; 0.2838; 0.1922 ] , [ γ̂_11; γ̂_12; γ̂_21; γ̂_22; γ̂_32 ] = [ 0.2568; 0.8436; -0.9999; 0.2433; -0.8884 ], and at the second point (BP2), tanβ = 11.4303, η_b = 0.598145, τ = 0.0661+3.0791i, [ α̂_1; α̂_2; α̂^1_3; α̂^2_3 ] = [ 1.5437; -1.4035; -1.4993; 1.3019 ] , [ β̂_11; β̂_12; β̂_21; β̂_22; β̂_32 ] = [ 1.7296e^0.2134i; 1.2887; -0.5323; 0.2484; 0.2092 ] , [ γ̂_11; γ̂_12; γ̂_21; γ̂_22; γ̂_32 ] = [ 0.9997; 0.7726; -0.9332; 0.3163; -0.9619 ]. These points realize the quark and charged lepton masses, and the CKM angles as shown in Table <ref>. At both points, all of the observables are within 2σ range, and the largest discrepancy is 0.61σ (1.93σ) at the BP1 (BP2) for the charm (down) quark mass. The values of the parameters are similar at both points, but η_b is relatively small (large) at the BP1 (BP2). The absolute values of the coefficients are in the range of [0.19, 2,2] and [0.20, 1.7] at BP1 and BP2, respectively. Thus, the 1 coefficients can explain the hierarchies with the good accuracy. § NEUTRINO SECTOR We shall consider the neutrino sector in this section. We assume that the neutrinos are Majorana, so the neutrino masses are given by the Weinberg operator W∋ (ℓ H_u)^2 at low-energies. With our choice of the representations and the weights of ℓ∈5, the masses of the neutrinos and the mixing angles in the PMNS matrix are predicted to be (m_ν_1, m_ν_2, m_ν_3) ∼ (η^2, η^2, 1) ∼ (0.03, 0.03, 1), (s_12, s_23, s_13) ∼ (1, η, η) ∼ (1, 0.2, 0.2), and thus the neutrino observables will have the texture R^21_32 := m_ν_2^2-m_ν_1^2/m_ν_3^2-m_ν_1^2∼η^4 ∼ 0.0007, (s_12^2, s_23^2, s_13^2) ∼ (1, η^2, η^2) ∼ (1, 0.03, 0.03), independently to the UV completion of the Weinberg operator. Hence, the angle s_13^2 is naturally explained, while the ratio of the mass squared differences R^21_32 and s_23^2 are predicted to be about an order of magnitude smaller than the observed values. We will discuss how these discrepancies are explained in an explicit model based on the type-I seesaw mechanism. For illustration, we assume the type-I seesaw mechanism to realize the tiny neutrino masses by introducing the three generations of singlets N_i. The superpotential is given by W = 1/2 N^T M_N N + N^T Y_n 5_A H^A ∋1/2 N^T M_N N + N^T Y_n ℓ H_u. We choose the representations and modular weights of the right-handed neutrinos as N = 1^1_0 ⊕ 2_0 =: N_1 ⊕ N_2, k_N = (0,2). The modular invariant superpotential is given by W = M_0/2[ A_1 N_1 N_1 + 2 A_2( 22_0 N_1 N_2 )_1 + A_3 41^0_0( N_2 N_2)_1 + A_4 (42_0( N_2 N_2)_2_0)_1 ] + ∑_i=1,2[ B_1i(N_15_i)_1 + B_2i(22_0 N_2 5_i )_1 ] H + B_23(42_0 N_2 5_3 )_1 H, where A_i and B_ij are the 1 coefficients, and M_0 is the overall scale of the Majorana mass term. (N_2 N_2)_2_0 takes the combination of the representation 2_0. Here, we take 01^0_0 = 1. There is no 1^0_0 at k=2. The Majorana mass matrix M_N and the neutrino Yukawa matrix Y_n are given by M_N = M_0/2[ 2 A_1 -√(2) A_2 [22_0]_2 √(2) A_2 [22_0]_1; -√(2) A_2 [22_0]_2 √(2) A_3 41^0_0 - A_4 [42_0]_1 A_4 [42_0]_2 ; √(2) A_2 [22_0]_1 A_4 [42_0]_2 √(2) A_3 41^0_0 + A_4 [42_0]_1; ], Y_n = 1/√(2)[ √(2) B_11 √(2) B_12 0; - B_21[22_0]_2 - B_22[22_0]_2 - B_23[42_0]_2; B_21[22_0]_1 B_22[22_0]_1 B_23[42_0]_1 ]. 
The neutrino mass matrix is given by M_ν = v_u^2 Ŷ_n^T M̂_N^-1Ŷ_n, where Ŷ_n and M̂_N are after the canonical normalization. For the neutrino observables, the elements suppressed by ∼0.05 are irrelevant, and hence the matrices are approximately given by M̂_N/M_0 (2Imτ)^2∼ [ A_1 η^2 0 -A_22η; 0 A_3√(6) + A_44√(2) 0; -A_22η 0 A_3√(6) - A_44√(2) ], Ŷ_n/(2Imτ)^2∼[ B_11η^2 B_12η^2 0; 0 0 0; -B_212η - B_222η -B_234 ]. Since the neutrino mass matrix M_ν is rank-2, the lightest neutrino mass appears only at ^3 [ If N is assigned to be a triplet, the neutrino mass will be rank-1 for → 0, and thus the observed pattern is more difficult to be realized. ]. The ratio of the heavier two neutrinos are given by √(R^21_32)≃m_ν_2/m_ν_3∼16(B_11^2+B_22^2)/B_23^2A_1( A_3/√(6) - A_4/4√(2) - A_2^2/4A_1) η^2, at the leading order in η. Because of the hierarchy in the neutrino masses, the model predicts the normal ordering <cit.>. The ratio is enhanced by (B_11^2+B_22^2)/(B_23/4)^2 ∼10 coming from the Dirac Yukawa matrix Y_n for all A, B= O(1), and thus the ratio of the mass squared difference R^12_23 is enhanced by 100, consistent with the observed value ∼ 0.03. While the mild discrepancy of s_23^2 will be explained simply by 5 ratios of the coefficients. We find the values to explain the neutrino observables at the benchmark points. At the BP1, the fitted values are M_0 = 1.2819× 10^16 GeV, [ A_1; A_2; A_3; A_4 ] = [ 1.8038; -1.3098; 1.0235; -4.0367 ] , [ B_11; B_12; B_21; B_22; B_23 ] = [ -2.4174; 1.0236; 4.0214; 1.0236; 2.6574 ], and at BP2, M_0 = 1.7775× 10^15 GeV, [ A_1; A_2; A_3; A_4 ] = [ 1.8984; -0.6240; -0.4992; 1.1528 ] , [ B_11; B_12; B_21; B_22; B_23 ] = [ 0.8983; -0.7126; 1.8984; 0.6480; -0.3344 ]. At these points, the CP phase in the PMNS matrix is originated from that in the charged lepton Yukawa matrix, which is common to the CKM matrix. We see that the values of the coefficients are close to 1 and the ratio of the coefficients are at most 3.9 (5.7) at the BP1 (BP2). § SUMMARY In this work, we build a model to realize the fermion hierarchy in SU(5) GUT utilizing the modular Γ_6^' symmetry. The residual symmetry associated with the generator T, namely _6^T symmetry, controls the powers of the small parameter e^-2πImτ/6. We argue that the Γ_6^' is the minimal possibility to realize the hierarchical masses of the up-type quarks, because the Yukawa matrix for the up-type quarks is symmetric in the SU(5) GUT. We assign the representations of 10 as 1^0_1⊕ 2^0_2 and 5 as 1^s_b⊕ 1^s_b ⊕ 1^s_b. We have to consider the double covering Γ_6^' to have the representations 2^s_b. The CKM matrix is hierarchical because the T charges of Q ∈ 10 are different, while the PMNS matrix is not hierarchical because the those of ℓ∈5 are the same. The representations of r and (s,b) are chosen such that the realistic Yukawa matrices are realized for the modular weights smaller than 6. We assigned the modular weights, so that certain modular forms exists and the Yukawa matrices are rank 3 up to ^6. In the model, we show that the 1 coefficients without hierarchies explain the observed quark and lepton masses and the CKM elements. We assume the type-I seesaw mechanism to explain the tiny neutrino masses. The singlet right-handed neutrinos are assigned to be 1^1_0 ⊕ 2_0 and their modular weights are chosen to be 0⊕ 2. With these assignments, one of the three neutrinos is suppressed by ^3 compared with the other two heavier neutrinos. 
The mass ratio of the heavier two neutrinos is not suppressed by , but is suppressed by (Imτ/2)^2 ∼10 due to the canonical normalization, see Eq. (<ref>). We showed that the 1 coefficients can explain the neutrino observables in the SU(5) GUT model. § ACKNOWLEDGMENT The work of J.K. is supported in part by the Institute for Basic Science (IBS-R018-D1). This work is supported in part by he Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture (MEXT), Japan No. JP22K03601 (T.H.) and JP23K03375 (T.K.). The work of Y.A. is supported by JSPS Overseas Research Fellowships. § Γ_6^' MODULAR SYMMETRY §.§ Group theory of Γ_6^' The algebra of Γ_6^' is given by <cit.> S^2 = R, TR=RT, R^2 = T^6 = (ST)^3 = ST^2ST^3 ST^4ST^3=1. That for Γ_6 = Γ_6^'/^R_2 is given by taking R=1. The irreducible representations of Γ_6^' are given by 1^s_b, 2_b, 3^s, 6, and 2^s_b, 4_b, where s=0,1 and b=0,1,2 correspond to the S_3 and A_4^' indices respectively, since Γ_6^' is isomorphic to S_3× A_4^'. The latter two representations exist only in Γ_6^'. The representation matrices are given by ρ_S(1^s_b) = (-1)^s, ρ_T(1^s_b) = (-1)^s w^b, for the singlet 1^s_b, ρ_S(2_b) = 1/2[ - 1 √(3); √(3) 1 ], ρ_T(2_b) = w^b [ 1 0; 0 -1 ], for the doublet 2_b, ρ_S(3^s) = (-1)^s 1/3[ -1 2 2; 2 -1 2; 2 2 -1 ], ρ_T(3^s) = (-1)^s [ 1 0 0; 0 w 0; 0 0 w^2 ], for the triplet 3^s and ρ_S(6) = 1/2[ - ρ_S(3^0) √(3)ρ_S(3^0); √(3)ρ_S(3^0) ρ_S(3^0); ], ρ_T(6) = diag(1,w,w^2,-1, -w,-w^2 ), for the sextet 6, where w := e^2π i/3. The representation matrices of 2^s_b and 4_b are respectively given by ρ_S(2^s_b) = (-1)^s i/√(3)[ 1 √(2); √(2) -1 ], ρ_T(2^s_b) = (-1)^s w^b [ 1 0; 0 w ], and ρ_S(4_b) = 1/2[ -ρ_S(2^0_b) √(3)ρ_S(2^0_b); √(3)ρ_S(2^0_b) ρ_S(2^0_b) ], ρ_T(4_b) = [ ρ_T(2^0_b) 0; 0 -ρ_T(2^0_b) ]. Note that the definition of 2^s_b is different from that in Ref. <cit.>, where ρ_T(2^s_b) ∝ w^b+1. The direct product of the singlets is given by 1^s_b ⊗ 1^t_c = 1^s+t_b+c. Here and hereafter, s+t (b+c) should be understood as modulo 2 (3). The products involving the singlet are given by 1^s_b(α) ⊗ 2_c (β) = P_2^s [ αβ_1; αβ_2 ]_2_b+c, 1^s_b (α) ⊗ 3^s(β) = P_3^b [ αβ_1; αβ_2; αβ_3 ]_3^r+s, 1^s_b (α) ⊗ 6(β) = P_sb[ αβ_1; αβ_2; αβ_3; αβ_4; αβ_5; αβ_6 ]_6, and 1^s_b ⊗ 2^t_c = 2^s+t_b+c, 1^s_b(α) ⊗ 4_c(β) = P_4^s [ αβ_1; αβ_2; αβ_3; αβ_4; ]_4_b+c, where P_2 := [ 0 1; -1 0 ], P_3 := [ 0 0 1; 1 0 0; 0 1 0 ], P_sb := [ 0 𝕀3; -𝕀3 0 ]^s [ P_3 0; 0 P_3 ]^b, P_4 := [ 0 𝕀2; -𝕀2 0 ]. The products of the doublets are given by [ α_1; α_2 ]_2_b⊗[ β_1; β_2 ]_2_c = 1/√(2)(α_1β_1+α_2β_2)_1^0_b+c⊕1/√(2)(α_1β_2-α_2β_1)_1^1_b+c⊕1/√(2)[ α_2β_2-α_1β_1; α_1β_2+α_2β_1 ]_2_b+c, [ α_1; α_2 ]_2^s_b⊗[ β_1; β_2 ]_2_c = P_4^s[ α_1β_1; α_2 β_1; α_1 β_2; α_2β_2 ]_4_b+c, [ α_1; α_2 ]_2^s_b⊗[ β_1; β_2 ]_2^t_c = 1/√(2)(α_1β_2-α_2β_1)_1^s+t_b+c+1⊕ P^b+c_3 1/√(2)[ -√(2)α_1β_1; α_1β_2 +α_2β_1; √(2)α_2β_2 ]_3^s+t. 
The products of the doublet and triplet are [ α_1; α_2 ]_2_b⊗[ β_1; β_2; β_3 ]_3^s = P_sb[ α_1 β_1; α_1 β_2; α_1 β_3; α_2 β_1; α_2 β_2; α_2 β_3; ]_6, [ α_1; α_2 ]_2^s_b⊗[ β_1; β_2; β_3 ]_3^t = 1/√(3)[ α_1β_1+√(2)α_2β_3; √(2)α_1β_2-α_2β_1 ]_2^s+t_b⊕1/√(3)[ α_1β_2+√(2)α_2β_1; √(2)α_1β_3-α_2β_2 ]_2^s+t_b+1 ⊕1/√(3)[ α_1β_3+√(2)α_2β_2; √(2)α_1β_1-α_2β_3 ]_2^s+t_b+2, and the product of the triplets is given by [ α_1; α_2; α_3 ]_3^s⊗[ β_1; β_2; β_3 ]_3^t = 1/√(3)(α_1β_1+α_2β_3+α_3β_2)_1^s+t_0⊕1/√(3)(α_3β_3+α_1β_2+α_2β_1)_1^s+t_1 ⊕ 1/√(3)(α_2β_2+α_3β_1+α_1β_3)_1^s+t_2 ⊕1/√(6)[ 2α_1β_1-α_2β_3-α_3β_2; 2α_3β_3-α_1β_2-α_2β_1; 2α_2β_2-α_3β_1-α_1β_3; ]_3^s+b_S ⊕ 1/√(2)[ α_2β_3-α_3β_2; α_1β_2-α_2β_1; α_3β_1-α_1β_3; ]_3^s+b_A. The product rules involving the higher dimensional representations are shown in Ref. <cit.>. §.§ Modular forms The modular forms at k=1 are given by <cit.> Y^(1)_2^0_0 = [ Y_1; Y_2 ]_2^0_0 = [ 3e_1 + e_2; 3√(2)e_1 ], Y^(1)_4_2 = [ Y_3; Y_4; Y_5; Y_6 ]_4_2 = [ 3√(2) e_3; -3e_3 - e_5; √(6) e_3 -√(6)e_6; -√(3)e_3 + 1/√(3) e_4 - 1/√(3) e_5+√(3)e_6 ], where the functions are defined as e_1(τ):= η(3τ)^3/η(τ), e_2(τ):= η(τ/3)^3/η(τ), e_3(τ):= η(6τ)^3/η(2τ), e_4(τ):= η(τ/6)^3/η(τ/2), e_5(τ):= η(2τ/3)^3/η(2τ), e_6(τ):= η(3τ/2)^3/η(τ/2). The modular forms with higher weights can be constructed by taking the direct products. The modular forms at weight k=2 are Y^(2)_1^1_2 = (Y_3Y_6-Y_4Y_5)_1^1_2, Y^(2)_2_0 = 1/√(2)[ Y_1Y_4-Y_2Y_3; Y_1Y_6-Y_2Y_5 ], Y^(2)_3^0 = [ -Y_1^2; √(2)Y_1Y_2; Y_2^2 ], Y^(2)_6 = 1/√(2)[ Y_1Y_4+Y_2Y_3; √(2)Y_2Y_4; -√(2)Y_1Y_3; Y_1Y_6+Y_2Y_5; √(2) Y_2Y_6; -√(2) Y_1Y_5 ]. At weight k=3, Y^(3) _2^0_0 = 1/√(3)[ -Y_1^3+√(2)Y_2^3; 3Y_1^2Y_2 ], Y^(3) _2^0_2 = 1/√(3)[ 3Y_1Y_2^2; -√(2) Y_1^3-Y_2^3 ], Y^(3) _2^1_2 = (Y_3Y_6-Y_4Y_5) [ Y_1; Y_2 ], Y^(3)_4_0 = √(2/3)[ Y_1Y_2Y_3 - Y_1^2 Y_4; Y_2^2 Y_3 - Y_1Y_2Y_4; Y_1Y_2Y_5 - Y_1^2 Y_6; Y_2^2 Y_5 - Y_1Y_2Y_6 ], Y^(3)_4_1 = 1/√(3)[ Y_2^2Y_3+2Y_1Y_2Y_4; -Y_2^2Y_4-√(2)Y_1^2Y_3; Y_2^2Y_5+2Y_1Y_2Y_6; -Y_2^2Y_6-√(2)Y_1^2Y_5 ], Y^(3)_4_2 = 1/√(3)[ -Y_1^2Y_3 + √(2)Y_2^2Y_4; Y_1^2Y_4 + 2Y_1Y_2Y_3; -Y_1^2Y_5 + √(2)Y_2^2Y_6; Y_1^2Y_6 + 2Y_1Y_2Y_5 ], and at weight k=4, Y^(4)_1_0^0 = 1/√(3) Y_1 (Y_1^3 + 2√(2) Y_2^3), Y^(4)_1_1^0 = (Y_3Y_6-Y_4Y_5)^2, Y^(4)_2_0 = 1/2√(2)[ (Y_1Y_6-Y_2Y_5)^2-(Y_1Y_4-Y_2Y_3)^2; 2(Y_1Y_4-Y_2Y_3)(Y_1Y_6-Y_2Y_5) ], Y^(4)_2_2 = 1/√(2)(Y_3Y_6-Y_4Y_5) [ Y_1Y_6-Y_2Y_5; Y_2Y_3-Y_1Y_4 ], Y^(4)_3^0 = √(2/3)[ Y_1(Y_1^3-√(2) Y_2^3); Y_2(Y_2^3+√(2) Y_1^3); 3Y_1^2Y_2^2 ], Y^(4)_3^1 = (Y_3Y_6-Y_4Y_5) [ √(2)Y_1Y_2; Y_2^2; -Y_1^2 ], Y^1(4)_6,1 = 1/√(2)(Y_3Y_6-Y_4Y_5) [ √(2)Y_2Y_6; -√(2)Y_1Y_5; Y_1Y_6+Y_2Y_5; - √(2)Y_2Y_4; √(2)Y_1Y_3; -Y_1Y_4-Y_2Y_3 ], Y^2(4)_6,2 = 1/√(2)[ -(Y_1Y_4-Y_2Y_3) Y_1^2; √(2)(Y_1Y_4-Y_2Y_3) Y_1Y_2; (Y_1Y_4-Y_2Y_3) Y_2^2; -(Y_1Y_6-Y_2Y_5) Y_1^2; √(2)(Y_1Y_6-Y_2Y_5) Y_1Y_2; (Y_1Y_6-Y_2Y_5) Y_2^2; ]. 
At k=5, 52^0_0 = 1/√(3)Y_1(Y_1^3+2√(2)Y_2^3) [ Y_1; Y_2 ], 52^0_1 = (Y_3Y_6-Y_4Y_5)^2 [ Y_1; Y_2 ], 52^0_2 = 1/3[ -5Y_1^3Y_2^2 - √(2)Y_2^5; 5Y_1^2Y_2^3 - √(2) Y_1^5 ], 52^1_1 = 1/√(3)(Y_4Y_5-Y_3Y_6) [ -3Y_1Y_2^2; √(2)Y_1^3 + Y_2^3 ], 52^1_2 = 1/√(3)(Y_3Y_6-Y_4Y_5) [ -Y_1^3+√(2)Y_2^3; 3Y_1^2Y_2 ], 54_0,1 = 1/√(3)(Y_3Y_6-Y_4Y_5) [ Y_2^2Y_5+2Y_1Y_2Y_6; -Y_2^2Y_6-√(2)Y_1^2Y_5; -Y_2^2Y_3-2Y_1Y_2Y_4; Y_2^2Y_4+√(2)Y_1^2Y_3 ], 54_0,2 = (Y_3Y_6-Y_4Y_5)^2 [ Y_3; Y_4; Y_5; Y_6 ], 54_1 = 1/√(3)(Y_3Y_6-Y_4Y_5) [ -Y_1^2Y_5+√(2)Y_2^2Y_6; Y_1^2Y_6 + 2Y_1Y_2Y_5; Y_1^2Y_3-√(2)Y_2^2Y_4; -Y_1^2Y_4-2Y_1Y_2Y_3; ], 54_2,1 = √(2/3)(Y_3Y_6-Y_4Y_5) [ Y_1Y_2Y_5-Y_1^2Y_6; Y_2^2Y_5 - Y_1Y_2Y_6; -Y_1Y_2Y_3+Y_1^2Y_4; -Y_2^2Y_3 + Y_1Y_2Y_4; ], 54_2,2 = 1/√(3)Y_1(Y_1^3+2√(2)Y_2^3) [ Y_3; Y_4; Y_5; Y_6 ]. At weight k=6, 61^0_0 = 1/4√(2)(Y_1Y_4-Y_2Y_3) { 3(Y_1Y_6-Y_2Y_5)^2-(Y_1Y_4-Y_2Y_3)^2}, 61^1_0 = (Y_3Y_6-Y_4Y_5)^3, 61^1_2 = 1/√(3)Y_1(Y_1^3+2√(2)Y_2^3) (Y_3Y_6-Y_4Y_5), 62_0 = 1/√(6) Y_1(Y_1^3+2√(2)Y_2^3) [ Y_1Y_4-Y_2Y_3; Y_1Y_6-Y_2Y_5 ], 62_1 = 1/√(2)(Y_3Y_6-Y_4Y_5)^2 [ Y_2Y_3-Y_1Y_4; Y_2Y_5-Y_1Y_6 ], 62_2 = 1/2√(2)(Y_3Y_6-Y_4Y_5) [ 2(Y_1Y_4-Y_2Y_3)(Y_1Y_6-Y_2Y_5); (Y_1Y_4-Y_2Y_3)^2-(Y_1Y_6-Y_2Y_5)^2 ], 63^0,1 = 1/√(3) Y_1(Y_1^3+2√(2)Y_2^3) [ -Y_1^2; √(2)Y_1Y_2; Y_2^2 ], 63^0,2 = 1/√(3)[ Y_2^6-2√(2)Y_1^3Y_2^3; 2√(2)Y_1^5Y_2 - Y_1^2Y_2^4; -4Y_1^4Y_2^2 + √(2)Y_1Y_2^5 ], 63^1 = √(2/3)(Y_3Y_6-Y_4Y_5) [ Y_2(Y_2^3+√(2)Y_1^3); 3Y_1^2Y_2^2; Y_1(Y_1^3-√(2)Y_2^3) ], 66,1 = 21^1_2⊗46,1, 66,2 = 21^1_2⊗46,2, 66,3 = 22_0⊗43^0. The representations existing for k≤ 6 are summarized in Table <ref>. At τ∼ i∞, it is convenient to explained the modular forms by q := e^2π i τ whose absolute values is small. The q-expansions of the functions e_i are given by e_1 = q^1/3 + q^4/3, e_2 = 1-3q^1/3 + 6q - 3q^4/3, e_3 = q^2/3, e_4 = 1-3q^1/6+6q^1/2-3q^2/3 -6q^7/6 + 6q^3/2, e_5 = 1-3q^2/3, e_6 = q^1/6+q^2/3 + 2q^7/6, where q^2 is neglected. Defining := √(3)q^1/6, the modular forms up to ^5 are given by 12^0_0 = [ Y_1; Y_2 ]≃[ 1; √(2)^2 ], 14_2 = [ Y_3; Y_4; Y_5; Y_6; ]∼1/3[ √(2)^4; -3; -3√(2); 2^3 ], at k=1. For reference, we show the expansions of the modular forms of up to the three dimensional representations for k≤ 6. At k=2, Y^(2)_1^1_2∼ -√(2), Y^(2)_2_0∼1/3√(2)[ -3; 8 ^3 ], 23_0 = [ -1; 2^2; 2^4 ]. At k=3, 32^0_0∼ 1/√(3)[ -1; 3√(2)^2 ], 32^0_2∼1/√(3)[ 6^4; -√(2) ], 32^1_2∼ - √(2)[ ; √(2)^3 ]. At k=4, 41^0_0∼ 1/√(3), 41^0_1∼ 2^2, 42_0∼ -1/6√(2)[ 3; 16^3 ], 42_2∼ -1/3[ 8^4; 3 ], 43^0∼ √(2/3)[ 1; 2^2; 6 ^4 ], 43^1∼√(2)[ -2^3; -2^5; ]. At k=5, 52^0_0∼ 1/√(3)[ 1; √(2)^2 ], 52^0_1∼[ 2^2; 2√(2)^4 ], 52^0_2∼ -1/3[ 10^4; √(2) ], 52^1_1∼ 1/√(3)[ -6√(2)^5; 2 ], 52^1_2∼1/√(3)[ √(2); -6^3 ]. At k=6, 61^0_0∼ 1/4√(2), 61^1_0∼ -2√(2)^3, 61^1_2∼ -√(2/3), 62_0∼ 1/3√(6)[ -3; 8^3 ], 62_1∼1/3[ 3√(2)^2; -8√(2)^5 ], 62_2∼1/6[ 16^4; -3 ], 63^0,1∼ 1/√(3)[ -1; 2^2; 2^4 ], 63^0,2∼1/√(3)[ 0; 4^2; -8^4 ], 63^1∼1/√(3)[ -4^3; -12^5; -2 ]. JHEP
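The leading behaviour of these q-expansions can be cross-checked numerically from the q-product of the Dedekind eta function. The following sketch (the truncation order and the value of τ are illustrative; Im τ is chosen close to the benchmark values) verifies that at large Im τ the weight-1 doublet behaves as [Y_1; Y_2] ≈ [1; √2 ε²]:

import cmath

def eta(tau, nmax=200):
    # Dedekind eta via its q-product, q = exp(2*pi*i*tau); nmax terms are ample here
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1 - q ** n
    return q ** (1 / 24) * prod

tau = 3.1j                                   # Im(tau) close to the benchmark values
e1 = eta(3 * tau) ** 3 / eta(tau)
e2 = eta(tau / 3) ** 3 / eta(tau)
Y1 = 3 * e1 + e2                             # weight-1 doublet component Y_1
Y2 = 3 * 2 ** 0.5 * e1                       # weight-1 doublet component Y_2
eps = 3 ** 0.5 * cmath.exp(2j * cmath.pi * tau / 6)
print(abs(Y1))                               # close to 1
print(abs(Y2), abs(2 ** 0.5 * eps ** 2))     # Y_2 is approximately sqrt(2) * eps^2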
http://arxiv.org/abs/2307.01531v1
20230704073452
Percolation in Networks of Liquid Diodes
[ "Camilla Sammartino", "Yair Shokef", "Bat-El Pinchasik" ]
cond-mat.soft
[ "cond-mat.soft", "physics.flu-dyn" ]
Percolation in Networks of Liquid Diodes
Camilla Sammartino^a, Yair Shokef^a,b,c,d, and Bat-El Pinchasik^a,b
^a School of Mechanical Engineering, Tel Aviv University, Tel Aviv 69978, Israel. E-mail: [email protected] , [email protected] , [email protected] ^b Center for Physics and Chemistry of Living Systems, Tel Aviv University, Tel Aviv 69978, Israel. ^c Center for Computational Molecular and Materials Science, and Center for Physics and Chemistry of Living Systems, Tel Aviv University, Tel Aviv 69978, Israel. ^d International Institute for Sustainability with Knotted Chiral Meta Matter, Hiroshima University, Japan.
Liquid diodes are surface structures that facilitate the flow of liquids in a specific direction. When these structures are within the capillary regime, they promote liquid transport without the need for external forces. In nature, they are used to increase water collection and uptake, reproduction, and feeding. While nature offers various one-dimensional channels for unidirectional transport, networks with directional properties are exceptional and typically limited to millimeters or a few centimeters. In this study, we simulate, design and 3D print liquid diode networks consisting of hundreds of unit cells. We provide structural and wettability guidelines for directional transport of liquids through these networks, and introduce percolation theory in order to identify the threshold between a connected network, which allows fluid to reach specific points, and a disconnected network. By constructing well-defined networks that combine uni- and bi-directional pathways, we experimentally demonstrate the applicability of models describing isotropically directed percolation. By varying the surface structure and the solid-liquid interfacial tension, we precisely control the portion of liquid diodes and bidirectional connections in the network and follow the flow evolution. We are, therefore, able to accurately predict the network permeability and the liquid's final state. These guidelines are highly promising for the development of structures for spontaneous, yet predictable, directional liquid transport.
§ INTRODUCTION
Percolation theory is used to describe critical phenomena in multiple types of complex physical systems<cit.> such as flow through porous or granular media<cit.>, electrical conductivity<cit.>, spreading of fires<cit.>, vascular networks<cit.>, biomolecular transport<cit.>, jamming of particulate systems <cit.>, and even the formation and release of traffic jams<cit.>. These phenomena can be described by different percolation models<cit.>, through the formation of connected clusters and networks. Well-defined, predictable experimental realizations of these models, however, still remain an open field of research, with potential applications in fluidics, electronics, power grids, epidemics, and biology<cit.>. To date, experimental efforts have focused on percolation of electrical conductivity<cit.>, mainly using the bond percolation model. In this model, the network is characterized by the probability p of having a bond between two neighboring sites. Similarly, p_0 = 1 - p is the probability of having a missing bond, namely a vacancy. If enough bonds are present, an infinite connected cluster is formed, the network is connected and percolates.
The critical probability of existing bonds, p_c, defines the threshold between the non-percolating and percolating phases<cit.>. In classic, or random, bond percolation, all bonds are bi-directional. Consequently, percolation through the network is isotropic. In directed percolation, on the other hand, the bonds are directional, and allow transport only in a preferred direction in the network<cit.>. Thus, such a system is connected anisotropically. An example of such directional transport in nature is seen in the Texas horned lizard. This desert lizard increases its water collection and intake by passive water transport through a network of directional channels on its scales, directing the water to the mouth over distances of a few centimeters<cit.>. Another, more recently introduced, percolation model corresponds to isotropically directed percolation<cit.>. There, bonds in opposite directions exist with a total probability p_1, together with bi-directional bonds, with probability p_2, and vacancies with probability p_0 = 1 - p_1 - p_2. This type of percolation is governed by the probability to find a bond connecting two neighboring sites<cit.>, defined as p_nn=p_1/2+p_2. Isotropically directed percolation has been used to analyze traffic patterns in New York and London<cit.>, but more experimental work is needed to further understand the applicability of the theory to real-life situations. In this study, we introduce an experimental realization of isotropically directed percolation using a network of three-dimensional (3D) printed network of liquid diodes. The liquid diodes comprise 3D surface structures that promote spontaneous uni-directional liquid flow in the capillary regime<cit.>. We design a 2D network made of millimeter-size open channels<cit.> that set the bonds between neighboring sites. By tuning geometric features of the liquid diodes and the contact angle (CA) that the flowing liquid creates with the surface, we control the diodicity of the bonds. Namely, whether the flow through a bond is uni- or bi-directional. As a result, we are able to control the number of directional bonds, scan over different values of p_nn, switch between different percolation states and find the percolation threshold for each configuration of the network. Establishing a well-defined physical system that enables fine-tuning of percolation parameters enables us to gain new insights into the fundamentals of directional transport phenomena in general<cit.>, and isotropically directed percolation specifically. This includes direct measurements of the flow dynamics through the network, and studying the influence of the system size and initial conditions of the fluid on its final state and configuration. In addition, controlling the percolation threshold and being able to tune the permeability of a network opens new horizons for designing microfluidic complex networks for mixing and separation<cit.>, heat transfer<cit.> and actuation<cit.>. § RESULTS AND DISCUSSION §.§ Liquid diodes networks: design and arrangement Figure <ref> shows the liquid diodes used in this work<cit.> and how they are implemented to form 2D disordered isotropically directed networks. Each diode features an asymmetric geometry comprising four main components (Figure <ref>a): the entrance channel (hilla), a central area (bulga), an exit channel (orifice), and, in blue, an ellipsoid-shaped bump (pitch), shown in more detail in the inset of Figure <ref>a. 
The bulga is aligned perpendicularly to the orifice, creating a 90^∘ expansion in the channel’s width. This expansion, together with the pitch, creates a pressure barrier that pins the liquid in the backward direction (left to right). The height of the pitch is defined in terms of percentage of the channel’s depth, and in our networks, it varies between 10% and 60%, in steps of 10%. Different flow regimes arise, depending on the combination of the pitch height and the CA of the flowing liquid, summarized in the phase diagram shown in Figure <ref>b. In green is the diodic regime, where the flow is unidirectional. Figure <ref>c depicts timeframes of liquid flow in a channel made of several consecutive unit cells with 40% pitch height and a CA of 45^∘. By lowering the CA (i.e. increasing the liquid wettability<cit.>), the diodes start exhibiting flow in the backward direction and the diodicity breaks (Figure <ref>c, left). The dependence of the diodes' performance on the pitch height is associated with the local reduction and following expansion of the channel’s depth, caused by the pitch, when the liquid flows in the backward direction<cit.>. For example, with a pitch height of 40%, the channel depth is locally reduced to 60% its original value. After the pitch, the channel's depth returns to its full value (100%). This creates an additional pressure barrier when liquid propagates in the backward direction and has to overcome the pitch (see Figure <ref>). In the topmost part of the phase diagram, in dark blue, we observe no flow in either direction due to the poor wettability of the liquid. Figure <ref>d shows an isotropically directed 5×5 network comprising diodes of different pitch heights and orientations. We created three networks, 15×15 in size, with p_0 values of 0.2, 0.31 and 0.37, featuring, 336, 288 and 264 bonds, respectively, using diodes with pitch heights ranging from 10% to 60%, in steps of 10%. Within a given sample, we have the same number of diodes per pitch height and orientation. Hence, at each site, the probability of a unidirectional bond in each direction is uniform and the network is isotropic. The distribution of vacancies and diodes, in each direction, is randomized (see Materials and Methods). As we change the CA of the flowing liquid, a fraction of the bonds switches behavior, according to the phase diagram in Figure <ref>b, allowing us to scan over different values of p_nn for each sample of a given p_0. This is illustrated in Figure <ref>, using schematics of the three samples with different p_0, for increasing p_nn values, or decreasing CAs. As p_nn increases, the number of bidirectional bonds, indicated by magenta two-way arrows (Figure <ref>), increases, creating a larger connected cluster. §.§ Spontaneous directional flow in liquid diodes networks: simulations and experiments We first use numerical simulations to predict the flow pattern and the liquid permeability through the network. For each network configuration, we insert the "liquid" at a specific feeding site and let it propagate spontaneously, until the "liquid" fronts halt. Once the "liquid" reaches the final state, we extract the fraction of occupied sites f. This was repeated for eight randomly chosen feeding sites, two for each edge of the network, and for each CA. The simulations included all the possible feeding sites along the edges of the network. We then verify our predictions by flow experiments in the 3D-printed liquid diodes networks. 
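A minimal version of such a flood-fill simulation is sketched below (an illustrative re-implementation, not the code used for the figures; the lattice size, probabilities, feeding site and number of realizations are placeholders). Each nearest-neighbour bond is drawn as a vacancy with probability p_0, a bidirectional bond with probability p_2, or a randomly oriented diode with probability p_1 = 1 - p_0 - p_2, so that p_nn = p_1/2 + p_2; the liquid then propagates from the feeding sites along allowed directions only.

import random
from collections import deque

def simulate(L=15, p0=0.31, p2=0.30, seed=0, feeds=((0, 7),)):
    # Assign every nearest-neighbour bond once: vacancy (p0), bidirectional (p2)
    # or a diode with a random orientation (p1 = 1 - p0 - p2).
    rng = random.Random(seed)
    allowed = {}                                    # allowed[(a, b)] = True if flow a -> b is possible
    for i in range(L):
        for j in range(L):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni >= L or nj >= L:
                    continue
                r = rng.random()
                if r < p0:
                    fwd = bwd = False               # vacancy
                elif r < p0 + p2:
                    fwd = bwd = True                # bidirectional bond
                else:
                    fwd = rng.random() < 0.5        # liquid diode, random orientation
                    bwd = not fwd
                allowed[((i, j), (ni, nj))] = fwd
                allowed[((ni, nj), (i, j))] = bwd
    # breadth-first propagation of the liquid front from the feeding sites
    wet, queue = set(feeds), deque(feeds)
    while queue:
        a = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            b = (a[0] + di, a[1] + dj)
            if b not in wet and allowed.get((a, b), False):
                wet.add(b)
                queue.append(b)
    return len(wet) / L**2                          # fraction f of occupied sites

print(simulate())                                   # one realization, one feeding site on the edge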
Figure <ref> shows simulations (upper row) and experiments (bottom row) of the liquid final states with increasing p_nn (0.34, 0.51, 0.63), for p_0= 0.31. The red arrows denote the feeding sites (identical in all cases). When p_nn increases, a larger portion of the network is covered with liquid, a result of the increasing number of bidirectional bonds. Videos S1, S2 and S3 show the propagation dynamics of the three experiments. This leads to a phase transition between a non-percolating (LHS) and a percolating state (RHS). The liquid distribution is isotropic. Namely, no preferred direction is observed, and the liquid spreads homogeneously in all directions. We find an excellent agreement between the simulations and experiments. Yet, a few local failures in the diodes were observed due to pressure buildup. This additional pressure renders unidirectional bonds bidirectional, promoting flow in the backward direction and local breakdown of diodicity<cit.>. We now examine the dependence of f, the fraction of occupied sites, on p_nn, as shown in Figure <ref>. We observe the typical S-shape of percolation curves<cit.>. The three curves, for the three different p_0 values, show identical behavior and a notable degree of collapse. Simulations (circles) and experiments (squares) show excellent agreement. For each set of experiments, with a specific CA (i.e., p_nn), the fraction of occupied sites is averaged by taking the median of all experiments. This was done to account for discrepancies in the outcomes of experiments with the same p_nn but different feeding sites. In fact, not all sites on the border result necessarily in a similar connected path and, hence, a similar f. Using the median reflects the distribution of the results and gives each experiment the appropriate weight, according to how often a specific outcome occurred. The percolation threshold lies around p_nn=1/2, as expected for the square lattice<cit.>. The variability seen in the data collapse is due to the moderate system size of the 3D-printed networks<cit.>. The numerical simulations show that increasing the system size results in a better data collapse and a sharper percolation transition. This is manifested in Figure <ref> by the magenta plots, for system sizes of 15×15, 100×100 and 1000×1000. Each different marker shape represents a different p_0 value. Finally, we investigate the effect of different feeding protocols on the shape of the percolation curves for finite system sizes. The blue plots in Figure <ref> represent percolation curves for increasing system size, obtained through numerical simulations, in which an entire edge of the network was fed rather than a single site on the edge. In this way, only four different initial conditions are averaged, in contrast to fifty-six in the case of single feeding sites. This feeding method results in a less sharp transition and a more symmetrical S-shape of the curve. The discrepancy between the two methods, more acute for small p_nn values, becomes less significant as the system size increases. Indeed, feeding the system through an entire edge results in an increased number of occupied sites for small p_nn values. This increase is significant for moderate (15×15) to medium (100×100) system sizes but becomes minute for larger (1000×1000) networks. § CONCLUSIONS In this work, we successfully conducted isotropically directed percolation experiments of fluid flow in 2D networks of liquid diodes. 
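Using the simulate function from the sketch above, the qualitative shape of the percolation curve and the dependence on the feeding protocol can be reproduced by sweeping p_2 at fixed p_0 and taking medians over disorder realizations (the values below are illustrative, not the parameters of the printed samples):

import statistics

L, p0 = 15, 0.31
for p2 in (0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65):
    p_nn = (1 - p0 - p2) / 2 + p2
    single = statistics.median(
        simulate(L, p0, p2, seed=s, feeds=((0, s % L),)) for s in range(40))
    whole_edge = statistics.median(
        simulate(L, p0, p2, seed=s, feeds=tuple((0, j) for j in range(L))) for s in range(40))
    print(f"p_nn = {p_nn:.2f}   f (single site) = {single:.2f}   f (whole edge) = {whole_edge:.2f}")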
Fine-tuning of the liquid diodes geometry enabled us to control and manipulate the network's statistical properties. Namely, to precisely control the amount of unidirectional and bidirectional bonds, and therefore, the flow patterns and propagation through the network. The excellent agreement between numerical simulations and experiments for finite-size systems of 15×15 sites validates the integrity of the simulations for larger scales. We have gained new insights into the impact of the feeding method (initial conditions) on the percolation transition. When feeding the system from a single site, percolation curves are steeper and the phase transition is more abrupt than feeding the system from an entire edge. This becomes less significant as we increase the system size and should be taken into account when analyzing or modeling real-life systems of moderate sizes. Additionally, we showed controlled passive transport of liquids in two dimensions over distances of about 15 cm. This is a testimony to the potential of liquid diodes not only as tools to investigate fundamental scientific questions, but also to fabricate devices with a high degree of design flexibility. Further work, using different fabrication methods, may allow us to scale down the dimensions of the unit cells and sites, resulting in bigger networks, featuring thousands of bonds. § MATERIALS AND METHODS §.§ 3D Printed Sample Design The liquid diodes design is based on our previous work <cit.>. The percentage of the pitch height was marked next to each unit cell for better visualization of the final sample. To design 2D networks, a computerized script was used to create a randomized isotropic distribution of diodes with various pitch heights for each value of p_0 (number of vacancies). Namely, for every pitch height, the number of diodes in each orientation (right, left, up, down) is identical. This way, from any random site within the network, there is a uniform probability of propagating in any direction. The nodes of the network are designed as a 4-point star with filleted edges (Figure 1d, i). This way, each junction to the neighboring diode, when present, is a wedge, and liquid is free to flow along the edges and around the corners. This proved to be more effective than a simple 90^∘ cross design, as the liquid would get potentially pinned at the intersections. The size of each sample is 15X15 sites. This was the most convenient size in terms of fabrication and experiments. §.§ Sample and Material Preparation Samples were 3D-printed, using the ProJet® MJP 2500 Series by 3D Systems (Rock Hill, South Carolina, USA), a multi-jet 3D printer. The material used for the printing is the VisiJet® M2R-CL, a transparent polymer. Each sample, for each p_0 value, was printed twice to check repeatability. After printing, the parts were cleaned to remove the supporting wax material. First, the support wax was dissolved in canola oil at 60^∘C. The sample was then placed in a second heated oil bath for finer wax removal. The oil residues were later washed away in soapy water at 60^∘C (all-purpose liquid detergent soap, soap to water volume ratio corresponds to 0.3:1). A small brush was used to gently clean the residues in narrow voids, without damaging small features. Soap residues were thoroughly rinsed in deionized (DI) water, and the sample was dried in open air and lastly rinsed with Ethanol and dried with an air gun. 
In all experiments, we used dyed DI water obtained by adding green food coloring (Maimon’s, Be’er Sheva, Israel) in a volume ratio of 0.05:1 (dye to water). Small quantities of all-purpose liquid detergent soap, ranging from a volume ratio of 0.02:1 to 0.07:1, were added to the water in order to decrease the native contact angle of the liquid. Six solutions with different contact angles were prepared and used in the experiments. The cleanliness of the sample surface was crucial for good performance and minimal unexpected pinning of the liquid. After each experiment, samples were rinsed with DI water and ethanol to remove all soap residues, and were then let to dry completely. §.§ Contact Angle Measurements The CA of the liquids was measured using a contact angle goniometer OCA25 by DataPhysics Instruments GmbH (Filderstadt, Germany). For each of the six liquids, five measurements were taken of 2 µL drops from different areas of the surface. The CA was checked before each experiment. §.§ Percolation Experiments and Image Analysis For each sample and for each CA (thus p_nn value), eight random sites along the outer edges, namely two sites per edge, were picked as the feeding sites for percolation experiments, for a total of 144 experiments. Liquid of a known contact angle was slowly poured into the feeding site using a Transferpette® S pipettes by BRAND GMBH + CO KG (Wertheim, Germany), in steps of 20 µL. This method enabled us to avoid pressure buildup around the feeding site, with consequent possible failures of neighboring diodes. Experiments were recorded from above using a Panasonic DC-S1 camera with Sigma 70 mm F2.8 DG Macro lens. Screenshots of videos and the fraction of occupied sites were obtained using a MATLAB script, adjusted from the one used in our previous work<cit.>. The algorithm separates the final frame of the video into three RGB channels and subtracts the green from the red, highlighting only the liquid front with respect to the sample background. The frame is turned into a binary black-and-white image, which is divided into squares, each comprising a diode, a site, or a vacancy. For each square comprising a site, the mean of the central pixels is calculated. If the mean is larger than 0.3 (mostly white pixels), then the site is empty. If the mean is lower than 0.3, the site is counted as filled by the liquid. For each set of experiments for a specific p_nn value, the median of the fraction of occupied sites over the eight experiments was taken. §.§ Numerical Simulations The numerical simulations reproduce the 3D-printed sample designs. For each CA, the distributions of the uni-directional and bi-directional bonds are well-defined, based on the phase diagram in Figure <ref>b. The p_nn value of each design is calculated and recorded. The network is then fed from a single site on the outer border and the occupancy (fraction of occupied bonds) is computed using a pass/no-pass function according to the directionality of each bond. This is repeated for every site on the border, and the median of all 56 experiments is calculated. The fraction f of occupied sites is then plotted against the corresponding p_nn value. § AUTHOR CONTRIBUTIONS CS, YS and BP jointly conceived of the research. CS designed and prepared the samples, performed the experiments and simulations, and analyzed the results. YS provided theoretical guidance. BP provided experimental guidance. CS, YS and BP jointly wrote the paper. § CONFLICTS OF INTEREST There are no conflicts to declare. 
§ ACKNOWLEDGMENTS This research was supported by the Israel Science Foundation (grant No. 1323/19).
http://arxiv.org/abs/2307.03347v1
20230707014802
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
[ "Qing Xu", "Min Wu", "Xiaoli Li", "Kezhi Mao", "Zhenghua Chen" ]
cs.LG
[ "cs.LG" ]
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
Qing Xu, Min Wu, Xiaoli Li, Kezhi Mao, Zhenghua Chen
================================================================================
For many real-world time series tasks, the computational complexity of prevalent deep learning models often hinders the deployment on resource-limited environments (e.g., smartphones). Moreover, due to the inevitable domain shift between model training (source) and deploying (target) stages, compressing those deep models under cross-domain scenarios becomes more challenging. Although some existing works have already explored cross-domain knowledge distillation for model compression, they are either biased to source data or heavily tangled between source and target data. To this end, we design a novel end-to-end framework called UNiversal and joInt Knowledge Distillation (UNI-KD) for cross-domain model compression. In particular, we propose to transfer both the universal feature-level knowledge across source and target domains and the joint logit-level knowledge shared by both domains from the teacher to the student model via an adversarial learning scheme. More specifically, a feature-domain discriminator is employed to align teacher's and student's representations for universal knowledge transfer. A data-domain discriminator is utilized to prioritize the domain-shared samples for joint knowledge transfer. Extensive experimental results on four time series datasets demonstrate the superiority of our proposed method over state-of-the-art (SOTA) benchmarks. The source code is available at https://github.com/ijcai2023/UNI_KD.
§ INTRODUCTION
Deep learning (DL) models, particularly convolutional neural networks (CNNs), have achieved remarkable successes in various time series tasks, such as human activity recognition (HAR) <cit.>, sleep stages classification <cit.> and fault diagnosis <cit.>. These advanced DL models are often over-parameterized for better generalization on unseen data <cit.>. However, deploying those models on a resource-limited environment (e.g., smartphones and robots) is a common requirement for many real-world applications. The contradiction between model performance and complexity leads to the exploration of various model compression techniques, such as network pruning and quantization <cit.>, network architecture search (NAS) <cit.> and knowledge distillation (KD) <cit.>.
Among them, KD has demonstrated its superior effectiveness and flexibility on enhancing the performance of a compact model (i.e., Student) via transferring the knowledge from a cumbersome model (i.e., Teacher). Another well-known problem in many time series tasks is the considerable domain shift between model development and deployment stages. For instance, due to the difference between subject's genders, ages or data collection sensors, a model trained on one subject (i.e., source domain) might perform poorly on another subject (i.e., target domain). Such domain disparity makes cross-domain model compression even more challenging. Some recent works have already attempted to explore the benefits of applying unsupervised domain adaption (UDA) techniques during compressing cumbersome DL models by knowledge distillation. However, there are some drawbacks in these approaches. For instance, joint training of a teacher with UDA and student with KD would result in unstable loss convergence <cit.>, while the knowledge from teachers trained on source domain only <cit.> is biased and limited. For cross-domain knowledge distillation, a proper teacher should possess the knowledge of both domains. In particular, the generalized knowledge (namely Universal Knowledge) across both domains is more critical in improving student's generalization capability on target domain. However, the aforementioned methods coarsely align teacher's and student's predictions, but neglect to disentangle the domain-shared knowledge (namely Joint Knowledge). Due to the existence of domain shift, introducing source-specific knowledge would result in poor adaptation performance. Fig. <ref> presents an example of our proposed universal and joint knowledge under cross-domain scenario. On the one hand, the universal knowledge across source and target domains as shown in Fig. <ref>(a) is important to improve the generalization capability for the student. On the other hand, the inevitable domain shift makes the distributions of source and target domains overlapped. Suppose that there exists a data-domain discriminator to correctly classify the samples into source or target domain. As depicted in Fig. <ref>(b), if some samples lie around its decision boundary, then these samples most likely possess some domain-shared information (i.e., joint knowledge) which makes the discriminator incapable of correctly identifying their data domain (i.e., source or target). Meanwhile, those samples which can be very confidently identified by the data-domain discriminator tend to possess domain-specific knowledge. Equally treating all samples like conventional KD approaches would be adverse to diminishing domain disparity, leading to poor generalization on target data. It is thus highly motivated to pay more attentions on samples with joint knowledge than samples with domain-specific knowledge for cross-domain knowledge distillation. In this paper, we propose an innovative end-to-end model compression framework to improve student's generalization capability under the cross-domain scenarios. Specifically, we design a feature-domain discriminator to align teacher's and student's feature representations for effectively distilling the universal knowledge. Meanwhile, a data-domain discriminator is developed to prioritize the samples with joint knowledge across two domains. It assists to disentangle teacher's logits by paying more attentions on the samples with joint knowledge. 
Via an adversarial learning scheme, teacher's universal and joint knowledge can be effectively transferred to the compact student. Our main contributions are summarized as follows. * A novel approach named universal and joint knowledge distillation (UNI-KD) approach is proposed to transfer teacher's universal and joint knowledge, which is an end-to-end framework for cross-domain model compression. Two discriminators (i.e., feature-domain and data-domain discriminators) with an adversarial learning paradigm are designed to distill above two knowledge on feature-level and logit-level, respectively. * We propose to disentangle teacher's logits with a data-domain discriminator by prioritizing the samples with joint knowledge across source and target domains. The joint knowledge could further boost the generalization ability of the compact student on target domain. * Extensive experiments are conducted on four real-world datasets across three different time series classification tasks and the results demonstrate the superiority of our approach over other SOTA benchmarks. § RELATED WORK Knowledge distillation, as one of the most popular model compression techniques, has been widely explored in many applications. Originally, the knowledge from a complex teacher model is formulated as the logits soften by a temperature factor in <cit.>. Then, researchers extend the knowledge to the feature maps as they contain more low-level information than logits. Several works try to minimize the discrepancy between teacher's and student's feature representations via explicitly defining distance metrics, such as L2 <cit.>, attention maps <cit.>, probability distributions <cit.> and inter-channel correlation matrices <cit.>. On the contrary, other researchers exploit the adversarial learning scheme which implicitly forces the student to generate similar feature maps as the teacher <cit.>. However, these approaches cannot be directly applied to cross-domain scenarios as they do not consider the domain shift during the compression. To tackle the domain shift issue, various UDA approaches have been proposed. Generally, these techniques can be categorized into two types, namely discrepancy-based and adversarial learning-based. The former ones intend to minimize some statistical distribution measurements between source and target domains, e.g., maximum mean discrepancy (MMD) <cit.>, the second-order statistics <cit.> or higher-order moment matching (HoMM) <cit.>. Whereas the adversarial learning-based ones attempt to learn domain-invariant representations via a domain discriminator <cit.>. Although above mentioned UDA approaches have been successfully applied to many research areas, they seldom consider model complexity issue during domain adaptation, which is more practical for many time series tasks. Recently, there are some attempts to jointly address model complexity and domain shift problems by integrating UDA techniques with KD for cross-domain model compression. In <cit.>, a framework was proposed to employ the MMD to learn domain-invariant representations for teacher and progressively distill the knowledge to the student on both source and target data. However, their approach would lead to difficulty on student's convergence. MobileDA <cit.> performed the distillation on target domain with the knowledge from a source-only teacher. It leveraged the correlation alignment (CORAL) loss to learn domain-invariant representations for student. 
Similarly, a framework was proposed to perform adversarial learning and distillation on target domain with the knowledge from a source-only teacher in <cit.>. The teachers in <cit.> and <cit.> are trained on source data only and the knowledge from such teachers is very biased and limited. Unlike them, our method employs a teacher trained on labeled source domain and unlabeled target domain, and we distill not only the universal feature-level knowledge across both domains but also the joint logit-level knowledge shared by both domains via an adversarial learning scheme. § METHODS §.§ Problem Definition For the cross-domain model compression scenario, we first assume that a proper teacher Ψ is pre-trained on source and target domain data with SOTA UDA methods (e.g., DANN <cit.>). Our objective is to improve the generalization capability of the compact student ψ on target domain data. Same as other UDA works, we assume that data come from two domains: source and target. Data from source domain are labeled, 𝒟_src^L = {x_src^i, y_src^i}^N_src_i=1, and data from target domain are collected from a new environment without labels, 𝒟_tgt^U = {x_tgt^i}^N_tgt_i=1. Here, N_src and N_tgt refer to the number of samples in source and target domains, respectively. Let 𝒫(X_src) and 𝒬(X_tgt) be the marginal distributions of two domains. UDA problems assume that 𝒫(X_src) ≠𝒬(X_tgt) but 𝒫(Y_src|X_src) = 𝒬(Y_tgt|X_tgt), indicating that source and target domains have different data distributions but share a same label space. Our proposed UNI-KD is depicted as Fig. <ref>. In order to effectively compress the model for the cross-domain scenario, we formulate teacher's knowledge into two categories: feature-level universal knowledge across two domains and logit-level joint knowledge shared by two domains. These two types of knowledge are complementary to each other. We introduce two discriminators with the adversarial learning scheme to efficiently transfer these two types of knowledge. §.§ Universal Knowledge Distillation For cross-domain KD, we define the generalized knowledge across both domains as the universal knowledge. Such universal knowledge contains the fundamental characteristics existing in both source and target domains. In order to align teacher's and student's feature representations, adversarial learning scheme is utilized as it is capable of enhancing student's robustness <cit.>. Previous works also demonstrate that it could improve student's generalization capability on unseen data <cit.>. Motivated by this, we first design a feature-domain discriminator D_f for transferring the universal feature-level knowledge. D_f is a binary classification network to identify the source of input feature, i.e., whether the input features [f_src, f_tgt] come from Ψ or ψ. We train D_f and student ψ in an adversarial manner. To be specific, in the first step, we fix student ψ and train D_f via loss ℒ_DIS as Eq. (<ref>) shows. A batch of `real' samples (feature maps from teacher with source and target domain samples as input) is forwarded through D_f to calculate the loss log(D_f(f_x^Ψ)). The gradients are calculated with back propagation. Then a batch of `fake' samples (feature maps from student with same inputs) is forwarded through D_f to calculate loss log(1-D_f(f_x^ψ)). The gradients are accumulated to previous gradients from `real' samples. At last, minimizing ℒ_DIS maximizes the probability of correctly classifying the input features as `real' (from teacher) or `fake' (from student). 
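A minimal PyTorch-style sketch of this first step is given below (an illustration only, not the released implementation; the discriminator architecture, feature dimension and optimizer settings are placeholders, and D_f is assumed to end in a sigmoid):

import torch
import torch.nn.functional as F

def discriminator_step(D_f, teacher_feats, student_feats, opt_D):
    # Step 1: update only the feature-domain discriminator.  Teacher features are
    # labelled 'real' (1) and student features 'fake' (0); minimizing this BCE is
    # equivalent to minimizing the discriminator loss described above.
    opt_D.zero_grad()
    real = D_f(teacher_feats.detach())
    fake = D_f(student_feats.detach())
    loss = F.binary_cross_entropy(real, torch.ones_like(real)) + \
           F.binary_cross_entropy(fake, torch.zeros_like(fake))
    loss.backward()
    opt_D.step()
    return loss.item()

# toy usage with 128-dimensional features and a two-layer discriminator
D_f = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1), torch.nn.Sigmoid())
opt_D = torch.optim.Adam(D_f.parameters(), lr=1e-4)
print(discriminator_step(D_f, torch.randn(32, 128), torch.randn(32, 128), opt_D))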
The second step is to fix D_f and train the student to generate feature maps similar to the teacher's. By minimizing ℒ_GEN in Eq. (<ref>), the discriminator D_f is expected to become incapable of telling whether the features come from Ψ or ψ. Alternately applying the above two steps over all the training samples forces the student to learn feature maps similar to the teacher's. ℒ_DIS= -𝔼_x ∼ (𝒟_src,𝒟_tgt)[log(D_f(f_x^Ψ)) + log(1-D_f(f_x^ψ))], ℒ_GEN= 𝔼_x ∼ (𝒟_src,𝒟_tgt)[log(1-D_f(f_x^ψ))]. However, this adversarial learning scheme faces some challenges. Firstly, it only transfers the universal knowledge and neglects the domain disparity between the source and target domains, resulting in poor generalization on the target domain. Secondly, the optimization of the student ψ heavily relies on the accuracy of D_f, so the student may struggle to converge, especially in the early training stage. Thus, we introduce three additional losses in the second step of the adversarial learning scheme to cope with these issues in the following section. §.§ Joint Knowledge Distillation The teacher's logits contain more information than one-hot labels and can thus be utilized as `dark' knowledge for distillation <cit.>. However, we empirically found that simply combining conventional logit knowledge with feature distillation might lead to performance degradation. Due to the domain shift, the knowledge from the teacher can be divided into domain-joint knowledge shared by the two domains and domain-specific knowledge existing only in a particular domain. Since we feed both source and target domain samples to the teacher and student, naively matching their logit distributions would transfer both types of knowledge, leading to poor transfer performance. Therefore, we intend to transfer the domain-joint knowledge, but not the domain-specific knowledge, to the student. To achieve this, we utilize a data-domain discriminator D_d whose output is a binary probability vector l̂_d = [p_c=0, p_c=1]. Each element of this vector represents the probability of the input belonging to the source domain (c=0) or the target domain (c=1). We argue that samples lying around the distribution boundary between the source and target domains in the feature space are more generic than samples that can be classified with high confidence. In other words, if the data-domain discriminator D_d cannot distinguish a sample in the feature space, this sample most likely belongs to 𝒫(X_src) ⋂𝒬(X_tgt) and possesses more domain-joint knowledge than others. Mathematically, p_c=0 and p_c=1 should be close to each other for these samples. Thus, we can utilize l̂_d to disentangle the teacher's logits and let the student pay more attention to those low-confidence samples during logit distillation. Specifically, for each sample i, we assign a weight w_i to adjust its contribution to the logit-level knowledge distillation in Eq. (<ref>). w_i= 1 - |p_c=0^i - p_c=1^i|. Then, the loss ℒ_JKD for joint knowledge distillation can be formulated as Eq. (<ref>), where KL denotes the Kullback-Leibler divergence and q^s, q^t ∈ℝ^C are the predictions of the student and teacher, respectively. C is the number of classes. Each element q_j in q^s or q^t is the probability of the input sample belonging to the j^th class, with j ∈{1,...,C}. q_j is a function of the temperature factor τ used for smoothing the distribution and can be calculated via Eq. (<ref>), where z_j represents the model outputs (i.e., logits) before the softmax layer.
ℒ_JKD= τ^2 * (1/N)∑_i=1^N w_i * KL(q^s || q^t), q_j = exp(z_j/τ)/∑_k exp(z_k/τ). It is foreseeable that the efficacy of disentangling domain-joint and domain-specific knowledge depends to a large extent on the accuracy of D_d. Therefore, we introduce a domain confusion loss ℒ_DC to assist the training of D_d in Eq. (<ref>), where l_d and l̂_d are the data-domain labels and the predictions of the data-domain discriminator D_d, respectively. ℒ_DC = - 𝔼_x ∼ (𝒟_src,𝒟_tgt) [l_d*logl̂_d + (1-l_d)*log(1-l̂_d)]. Furthermore, overfitting might occur if we only utilize target domain samples to train the student. To avoid this, we also optimize the student on the source domain via a cross-entropy loss as Eq. (<ref>) shows. ℒ_CE= - 𝔼_(x_src,y_src) ∼𝒟_src∑_c [1_[y_src=c]log q^s_c ]. Then, the final loss for training the student ψ in the second step of the adversarial learning paradigm is formulated as below: ℒ= ℒ_GEN + (1-α)*ℒ_DC + α*ℒ_JKD + β*ℒ_CE. Here, we introduce two hyperparameters: α to balance the importance of the domain confusion loss against the logit KD loss, and β to adjust the contribution of ℒ_CE. For α, intuitively we intend to place more weight on UDA first to obtain a good data-domain discriminator and then gradually increase the importance of JKD as training proceeds. This strategy helps stabilize the student's training in the early stage. At each training epoch m, the corresponding value of α can be calculated by Eq. (<ref>), where M is the total number of epochs and a and b are the starting and end values of α. In our experimental setting, we set α∈ [0.1,0.9]. The value of α increases exponentially with the training epochs. Meanwhile, we utilize a grid search to identify the optimal value of the parameter β. α = a * e^((m/M) log(b/a)). Algorithm <ref> illustrates the details of our proposed UNI-KD for cross-domain model compression. § EXPERIMENTS §.§ Datasets We evaluate our method on four commonly used time series classification datasets across three different real-world applications: human activity recognition, sleep stage classification and fault diagnosis. UCI HAR <cit.>: A smartphone is fixed on the waist of 30 experimental subjects and each subject is requested to perform six activities, i.e., walking, walking upstairs, walking downstairs, standing, laying and sitting. The measurements from the accelerometer and gyroscope are recorded to identify each activity. Due to the variability between subjects, we consider each subject as an independent domain and randomly select five cross-domain scenarios for evaluation, the same as <cit.>. HHAR <cit.>: In this dataset, each subject conducts six activities, i.e., biking, sitting, standing, walking, walking upstairs and walking downstairs. Since different brands of smartphones and smartwatches are used for data collection, this dataset is considered more challenging than UCI HAR in terms of domain shift. We follow <cit.> and select five cross-domain scenarios for evaluation. FD <cit.>: A total of 32 bearings are tested under four different operating conditions for rolling bearing fault diagnosis. The motor current signals are recorded to classify the bearing health status, i.e., healthy, artificial damage (D1) and damage from accelerated lifetime tests (D2). We consider each operating condition as an independent domain and select five cross-domain scenarios for evaluation.
SSC <cit.>: Sleep stage classification (SSC) dataset aims to utilize electroencephalography (EEG) signals to identify subject's sleep stage, i.e., wake (W), non-rapid eye movement stage (N1, N2 and N3) and rapid eye movement (REM) stage. Each subject is considered as an independent domain and we select five scenarios for evaluation as previous studies <cit.>. §.§ Experiments Setup For our method, a well-trained teacher is a pre-requisite to perform cross-domain knowledge distillation. We adopt 1D-CNN as the backbone of our teacher and student models since it consistently outperforms other advanced backbones such as 1D residual network (1D-Resnet) and temporal convolutional neural network (TCN) as indicated in <cit.>. We leverage domain-adversarial training of neural networks (DANN) <cit.> approach to train the teacher. The student is a shallow version of teacher which has less filters. See Supplementary for network details of teacher and student. Table <ref> summarizes the model complexity of teacher and student in terms of the number of trainable parameters and the number of floating-point operations (FLOPs). We can see that our compact student is about 15× smaller than its teacher in the aspect of parameters and requires less operations during inference. Furthermore, regarding the evaluation metric, considering the fact that accuracy metric might not be representative for imbalanced dataset, we adopt macro F1-score for all experiments as suggested in <cit.>. For all experiments, we repeat 3 times with different random seeds and report the averaged values. §.§ Effectiveness of Adversarial Distillation Feature-level knowledge from teacher's intermediate layers has already been known as a good extension of logit-based knowledge as DL models are able to learn multiple levels of feature representations <cit.>. Various feature-based knowledge distillation approaches have been proposed in existing works. However, for cross-domain KD scenarios, we argue that adversarial learning could more effectively transfer the universal knowledge from teacher to student. To prove it, we first compare the adversarial feature KD with some commonly-used feature distillation approaches: Fitnet <cit.>, PKT <cit.>, AT <cit.>, IEKD <cit.> and ICKD <cit.>. Please refer to Supplementary for details of each approach. It is worth noting that we adapt above methods to our joint logit-level knowledge distillation. The only difference between these methods and ours is the feature distillation part. We utilize the t-SNE to visualize the learnt feature maps of above feature distillation approaches on HHAR dataset. As depicted in Fig. <ref>, the features learned from our proposed UNI-KD are more concentrated and all classes are well separated without overlapping. These observations demonstrate that the adversarial feature KD scheme could efficiently transfer the universal knowledge to the student for the cross-domain scenario. More t-SNE visualization results on other three datasets can be found in Supplementary. 
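To make the two-step adversarial scheme and the confidence-weighted joint distillation of the Methods section concrete, the following PyTorch-style sketch outlines a single training iteration. It is a minimal illustration under stated assumptions rather than the authors' released implementation: the placeholder 1D-CNN encoders, layer sizes, optimizer settings and the 9-channel input shape are invented for the example, and only the loss structure (ℒ_DIS, ℒ_GEN, ℒ_DC, the weights w_i, ℒ_JKD and ℒ_CE) follows the equations above.

# Minimal PyTorch-style sketch of one UNI-KD training iteration (illustrative only).
# Module names, feature sizes and optimizers are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_classes, tau, alpha, beta = 64, 6, 2.0, 0.5, 0.5

# Placeholder 1D-CNN feature extractors and classifiers (stand-ins for teacher/student backbones).
def make_net(width):
    return nn.Sequential(nn.Conv1d(9, width, 8, padding=4), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(width, feat_dim))

teacher_feat, teacher_cls = make_net(128), nn.Linear(feat_dim, n_classes)
student_feat, student_cls = make_net(32), nn.Linear(feat_dim, n_classes)
d_feature = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # D_f
d_domain = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))   # D_d

opt_df = torch.optim.Adam(d_feature.parameters(), lr=1e-3)
opt_student = torch.optim.Adam(list(student_feat.parameters()) + list(student_cls.parameters())
                               + list(d_domain.parameters()), lr=1e-3)

def train_step(x_src, y_src, x_tgt):
    x_all = torch.cat([x_src, x_tgt], dim=0)            # source + target batch
    with torch.no_grad():                                # the pre-trained teacher is frozen
        f_t = teacher_feat(x_all)
        logits_t = teacher_cls(f_t)

    # Step 1: fix the student, update the feature-domain discriminator D_f (loss L_DIS).
    f_s = student_feat(x_all).detach()
    loss_dis = -(torch.log(torch.sigmoid(d_feature(f_t)) + 1e-8).mean()
                 + torch.log(1 - torch.sigmoid(d_feature(f_s)) + 1e-8).mean())
    opt_df.zero_grad(); loss_dis.backward(); opt_df.step()

    # Step 2: fix D_f, update the student with L_GEN + (1-alpha)L_DC + alpha*L_JKD + beta*L_CE.
    f_s = student_feat(x_all)
    logits_s = student_cls(f_s)
    loss_gen = torch.log(1 - torch.sigmoid(d_feature(f_s)) + 1e-8).mean()

    # Domain confusion loss for the data-domain discriminator D_d (source = 0, target = 1).
    dom_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    dom_logits = d_domain(f_s)
    loss_dc = F.cross_entropy(dom_logits, dom_labels)

    # Confidence weights w_i = 1 - |p(c=0) - p(c=1)|: samples near the domain boundary count more.
    p = F.softmax(dom_logits, dim=1).detach()
    w = 1 - (p[:, 0] - p[:, 1]).abs()

    # Weighted KL between temperature-softened student and teacher predictions (joint knowledge).
    log_q_s = F.log_softmax(logits_s / tau, dim=1)
    q_t = F.softmax(logits_t / tau, dim=1)
    kl_per_sample = F.kl_div(log_q_s, q_t, reduction="none").sum(dim=1)
    loss_jkd = (tau ** 2) * (w * kl_per_sample).mean()

    # Supervised cross-entropy on the labeled source half of the batch.
    loss_ce = F.cross_entropy(logits_s[: len(x_src)], y_src)

    loss = loss_gen + (1 - alpha) * loss_dc + alpha * loss_jkd + beta * loss_ce
    opt_student.zero_grad(); loss.backward(); opt_student.step()
    return loss.item()

# Example call with random tensors shaped like 9-channel windows of length 128.
print(train_step(torch.randn(8, 9, 128), torch.randint(0, n_classes, (8,)), torch.randn(8, 9, 128)))

The adaptive schedule for α can be layered on top by recomputing alpha = a * math.exp((m / M) * math.log(b / a)) at the start of every epoch m, following Eq. (<ref>).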
§.§ Benchmark Results and Discussions We compare our method with SOTA UDA algorithms, including some discrepancy-based and adversarial learning-based approaches as follows: deep domain confusion (DDC) <cit.>, minimum discrepancy for domain adaptation (MDDA) <cit.>, higher-order moment matching (HoMM) <cit.>, convolutional deep domain adaptation for time series data (CoDATS) <cit.>, conditional adversarial domain adaptation (CDAN) <cit.> and decision-boundary iterative refinement training with a teacher (DIRT-T) <cit.>. Note that these UDA methods are directly applied to compact student. Meanwhile, we also include the results of some advanced works which integrate UDA with KD as follows: joint knowledge distillation and unsupervised domain adaptation (JKU) <cit.>, adversarial adaptation with distillation (AAD) <cit.> and MobileDA <cit.>. See Supplementary for the details of benchmark approaches. Besides, the performance of teacher and student trained on source domain (“Student src-only") are also reported as they can be considered as the upper and lower limits of the compact student. Tables <ref> and <ref> summarize the evaluation results over different domain adaptation scenarios on four datasets. More experimental results on additional transfer scenarios can be found in Supplementary. From Tables <ref> and <ref>, some observations can be found. Firstly, directly applying UDA on a compact student, either the discrepancy-based or adversarial-based UDA approaches, could somehow boost the student's performance on target domain in most of cross-domain scenarios as expected. However, in certain transfer scenarios, negative transfer might also occur. For instance, students trained with the DDC method on 2→7 for HHAR and DIRT-T on 0→11 for SSC even achieve lower performance than the student trained on source only, indicating that those UDA methods might suffer from the inconsistency problem on different domain adaptation tasks. Secondly, the JKU method performs worse than AAD and MobileDA in most of transfer scenarios. The possible reason is that the teacher in JKU is trained together with the student, and it would lead to convergence problem for the student when progressively distilling the knowledge. Moreover, since the teachers used in AAD and MobileDA are only trained on source domain data, the knowledge from these teachers is very limited and biased to source domain, resulting performance degradation. Thirdly, those jointly optimizing KD and UDA methods even under-perform UDA methods like DIRT-T in most transfer scenarios, indicating that improperly transferring teacher's knowledge might decrease student's generalization capability on target data. Lastly, our method consistently performs the best in terms of averaged macro F1-score over all the four datasets and outperforms other benchmarks on most of transfer scenarios. Moreover, compared with other joint KD and UDA methods, our UNI-KD can significantly reduce the performance gaps between teacher and student with the proposed universal and joint knowledge. This observation demonstrates the effectiveness of our method on the cross-domain model compression scenario. Via the evaluation on various time series domain adaptation tasks, our method can robustly compress the DL models with competitive performance as the complex teacher. §.§ Ablation Study There are three key components in our proposed approach: feature-domain discriminator D_f, data-domain discriminator D_d and joint knowledge distillation (JKD). 
To analyze the contribution of each component, we conduct the ablation study shown in Table <ref>. Moreover, to validate the effectiveness of the proposed JKD, we also include the standard KD (SKD) in Table <ref>. Several conclusions can be drawn from Table <ref>. First, applying universal feature-level KD by integrating D_f on top of D_d consistently improves the student's performance over all datasets. However, integrating JKD on top of D_d unexpectedly causes performance degradation on HHAR and SSC compared with employing D_d only. The possible reason is that logits contain less information than feature maps: aligning the teacher's and student's features helps the student learn more general representations than logits alone. Moreover, our UNI-KD suggests that these two types of knowledge are complementary to each other, and combining them yields better performance, as the last row shows. Furthermore, from the last two rows in Table <ref>, we can conclude that compared with standard KD, our proposed JKD is more effective in the cross-domain scenario. §.§ Sensitivity Analysis There are two hyperparameters (i.e., α and β) in our proposed approach, as shown in Eq. (<ref>). For α, we propose to gradually increase the importance of the JKD loss during training, as our method relies on an accurate data-domain discriminator. To validate its effectiveness, we compare our adaptive α schedule with fixed α values, as illustrated in Fig. <ref>. We can see that the proposed adaptive α consistently achieves better results than a fixed α. For the hyperparameter β, we utilize a grid search to identify the optimal values for the different datasets. Fig. <ref> illustrates the performance under different values of β. We can see that a higher value of β results in over-fitting to the source data and decreases the performance, as expected. The optimal value for β is around [0.5, 1.0]. In all our experiments, we set β= 0.5 for the UCI HAR and HHAR datasets and β= 1.0 for the FD and SSC datasets. § CONCLUSION In this paper, we propose an end-to-end framework for cross-domain knowledge distillation. Our method utilizes an adversarial learning paradigm with a feature-domain discriminator and a data-domain discriminator to improve the student's generalization capability on the target domain. With our proposed approach, the universal knowledge across both domains and the joint knowledge shared by both domains can be effectively transferred from a pre-trained teacher to a compact student. The experimental results show that the proposed UNI-KD not only reduces the model complexity but also addresses the domain shift issue. § ACKNOWLEDGMENTS This work is supported by the Agency for Science, Technology and Research (A*STAR) Singapore under its NRF AME Young Individual Research Grant (Grant No. A2084c1067) and A*STAR AME Programmatic Funds (Grant No. A20H6b0151).
http://arxiv.org/abs/2307.00656v1
20230702202115
Scale of Dirac leptogenesis and left-right symmetry in the light of recent PTA results
[ "Basabendu Barman", "Debasish Borah", "Suruj Jyoti Das", "Indrajit Saha" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
[email protected]: Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, ul. Pasteura 5, 02-093 Warsaw, Poland. [email protected]: Department of Physics, Indian Institute of Technology Guwahati, Assam 781039, India. [email protected]: Department of Physics, Indian Institute of Technology Guwahati, Assam 781039, India. [email protected]: Department of Physics, Indian Institute of Technology Guwahati, Assam 781039, India. Motivated by the recent release of new results from five different pulsar timing array (PTA) experiments claiming to have found compelling evidence for primordial gravitational waves (GW) at nano-Hz frequencies, we study the consequences for two popular beyond the Standard Model (SM) frameworks in which such nano-Hz GW can arise from annihilating domain walls (DW). The minimal framework of Dirac leptogenesis, as well as the left-right symmetric model (LRSM), can lead to the formation of DW due to the spontaneous breaking of a Z_2 symmetry. Considering the NANOGrav 15 yr data, we show that the scale of Dirac leptogenesis should be above 10^7 GeV for conservative choices of Dirac Yukawa couplings with fine-tuning at the level of the SM. The scale of the minimal LRSM is found to be more constrained, M_LR ∼ 10^6 GeV, in order to fit the NANOGrav 15 yr data. Scale of Dirac leptogenesis and left-right symmetry in the light of recent PTA results (August 1, 2023) Introduction: Recently, four different pulsar timing array (PTA) experiments, namely NANOGrav <cit.>, the European Pulsar Timing Array (EPTA) together with the first data release from the Indian Pulsar Timing Array (InPTA) <cit.>, and PPTA <cit.>, all part of the consortium called the International Pulsar Timing Array (IPTA), have released their latest findings hinting at significant evidence for a stochastic gravitational wave (GW) background at nano-Hz frequencies. Similar evidence with larger statistical significance has also been reported by the Chinese Pulsar Timing Array (CPTA) collaboration <cit.>. While such a signal can be generated by supermassive black hole binary (SMBHB) mergers, though with a mild tension, the presence of exotic new physics alone or together with SMBHB can make the fit better <cit.>. Several follow-up papers have also studied the possible origin or implications of this observation from the point of view of dark matter <cit.>, axions or axion-like particles <cit.>, SMBHB <cit.>, first order phase transition <cit.>, primordial black holes <cit.>, primordial magnetic field <cit.>, domain walls <cit.>, inflation <cit.>, cosmic strings <cit.>, astrophysical neutrino oscillation <cit.> and QCD crossover <cit.>. While GW from domain walls (DW) have already been studied as a possible new physics explanation for the PTA results <cit.>, we consider the consequences for two popular beyond standard model (BSM) scenarios, namely the minimal Dirac leptogenesis and the left-right symmetric model (LRSM). The first model is a type I seesaw realisation for light Dirac neutrino mass, with the heavy vector-like neutral fermions being responsible for generating baryogenesis via leptogenesis <cit.> with light Dirac neutrinos, known as the Dirac leptogenesis scenario <cit.>. GW probes of high scale leptogenesis models have received considerable attention in recent times. In most of these works <cit.>, the cosmic string (CS) origin of GW has been studied by considering a U(1)_B-L framework with in-built heavy Majorana fermions responsible for generating the Majorana mass of light neutrinos as well as leptogenesis.
The scale of leptogenesis or U(1)_B-L breaking scale then decides the amplitude of the CS generated GW spectrum. However, in view of the latest PTA results preferring a positive slope of the GW spectrum, stable CS in such models no longer provide a good fit <cit.>. This raises the prospects for a Dirac leptogenesis model whose minimal version must have a softly broken Z_2 symmetry leading to formation of DW followed by generation of GW due to annihilation or collapse. While a general study related to GW probe of minimal Dirac leptogenesis was carried out in <cit.>, here we consider the implications of recent PTA findings on the scale of Dirac leptogenesis. On the other hand, GW probe of LRSM considering DW as the source have been studied in earlier works <cit.>. DW arise due to spontaneous breaking of parity in such models. While earlier works considered the detection aspects of this model, we now constrain the scale of left-right symmetry considering the latest PTA data. While both the models can explain the latest PTA data, the allowed parameter space remains squeezed to a tiny window, which should face more scrutiny with future data. Domain walls as source of GW: Domain wall is a two-dimensional topological defect arising from spontaneous breaking of discrete symmetries <cit.>. With the expansion of the universe, the energy density of DW falls slower compared to that of radiation or ordinary matter, having the potential to start dominating the energy density of the universe and ruin the successful predictions of standard cosmology. Such a disastrous situation can be prevented if DW are made unstable or diluted or if they have asymmetric initial field fluctuations  <cit.>. In minimal model of Dirac leptogenesis <cit.> as well as left-right symmetric models <cit.>, such DW arises due to the spontaneous breaking of a Z_2 symmetry. If we consider a Z_2-symmetric potential of a scalar field φ, it is straightforward to show the existence of two different vacua ⟨φ⟩ = ± u. It is also possible to find a static solution of the equation of motion given the two vacua to be realized at x →±∞, φ( x) = u tanh( √(λ_φ/2) u x ) , representing a domain wall extended along the x = 0 plane. Here λ_φ is the quartic self-coupling of the scalar field. The DW width is δ∼ m_φ^-1 = (√(2λ_φ) u)^-1. Another key parameter, known as the DW tension is given by σ_w = ∫_-∞^∞ dx ρ_φ = 2√(2)/3 √(λ_φ) u^3 = 2/3 m_φ u^2 ∼ u^3 , where ρ_φ denotes (static) energy density of φ and in the last step, m_φ∼ u is used. Assuming the walls to be formed after inflation, the simplest way to make them disappear is to introduce a small pressure difference <cit.>, a manifestation of a soft Z_2-breaking term. Such a pressure difference or equivalently, a bias term in the potential Δ V needs to be sufficiently large to ensure DW disappearance prior to the epoch of big bang nucleosynthesis (BBN) that is, t_ BBN > t_ dec≈σ_w/Δ V. It is also important to take care of the fact that the DW disappear before dominating the universe, requiring t_dec<t_dom, where t_dom∼ M_P^2/σ_w. Both of these criteria lead to a lower bound on the bias term Δ V. However, Δ V can not be arbitrarily large as it would otherwise prevent the percolation of both the vacua separated by DW. Such decaying DW therefore can emit GW <cit.>. The amplitude the spectrum at peak frequency f_ peak can be estimated as <cit.>[Here we are ignoring the friction effects between the walls and the thermal plasma <cit.>.] 
Ω_ GWh^2 (t_0) |_ peak ≃ 5.2 × 10^-20 ϵ̃_ gw A_w^4 (10.75/g_*)^1/3 × (σ_w/1 TeV^3 )^4 (1 MeV^4/Δ V)^2 , with t_0 being the present time. Away from the peak, the amplitude varies as Ω_ GW≃Ω_ GW|_ peak×( f_ peak/f) for  f>f_ peak ( f/f_ peak)^3 for  f<f_ peak , where the peak frequency is given by f_ peak (t_0) ≃ 3.99 × 10^-9 Hz A^-1/2 × ( 1 TeV^3/σ_w)^1/2 ( Δ V/1 MeV^4)^1/2 . In the above expressions, A_w is the area parameter <cit.>≃ 0.8 for DW arising from Z_2 breaking, and ϵ̃_ gw is the efficiency parameter ≃ 0.7 <cit.>. Note that the above spectrum can be obtained from a general parametrisation S(f/f_ peak) S(x) = (a+b)^c/(bx^-a/c+ax^b/c)^c for a=3 (required by causality) and b ≈ c ≈ 1 (suggested by simulation <cit.>). However, as noted in <cit.>, the values of b,c may depend upon the specific DW annihilation mechanism or regime all of which have not been explored in numerical simulations yet. This allows one to vary b,c to get a better fit with the PTA data <cit.>. When the GW production is ceased after the annihilation of the domain walls, the energy density of GW redshifts mimicking that of the SM radiation. As a result, GW itself acts as an additional source of radiation with the potential to alter the prediction of BBN. Thus, an excess of the GW energy density around T ≲𝒪(MeV), can be restricted by considering the limits on the number of relativistic degrees of freedom from CMB and BBN, encoded in . This, in turns, puts a bound on the amplitude of GW spectrum, demanding Ω_GW h^2≲ 5.6× 10^-6 <cit.>. Here we consider several projected limits on on top of the existing limit from Planck: N_eff = 2.99 ± 0.34 at 95% CL <cit.>. This bound is shown by the solid gray horizontal line in Fig. <ref>. Once the baryon acoustic oscillation (BAO) data are included, the measurement becomes more stringent: N_eff = 2.99 ± 0.17. A combined BBN+CMB analysis shows N_eff = 2.880 ± 0.144, as computed in Ref. <cit.>. This constraint is denoted by the dashed horizontal line. On the other hand, upcoming CMB experiments like CMB-S4 <cit.> and CMB-HD <cit.> will be able to probe as small as ∼ 0.06 and ∼ 0.027, respectively. These are indicated by dot-dashed and dotted lines respectively. The next generation of satellite missions, such as COrE <cit.> and Euclid <cit.>, leads to ≲ 0.013, as shown by the large dashed line. In Fig. <ref> we summarize theoretical bounds on the VEV u and the bias term Δ V, where all the shaded regions are disallowed from (i) decay of the DWs post BBN (in gray), where t_dec>1 sec, (ii) DW domination (in red) or t_dom<t_dec and (iii) percolation of the vacua separated by DW (in orange), i.e., an arbitrarily large Δ V>u^4. This leaves us with the white region in-between that is allowed, from where we choose our benchmark points (BP), as indicated in Tab. <ref>. The GW spectrum corresponding to the BPs in Tab. <ref> is illustrated in Fig. <ref>. As explained before, we distinctively see a blue-tilted pattern for f<f_peak, while the spectrum is red-tilted in the opposite limit. Here we project limits from BBO <cit.>, LISA <cit.>, DECIGO <cit.>, ET <cit.>, CE <cit.>, THEIA <cit.>, HL (aLIGO) <cit.>, μARES <cit.> and SKA <cit.>. In this plot, the range of GW spectrum from NANOGrav results <cit.> is shown by the red points. The gray-shaded region is completely disallowed from bound on overproduction of Ω_GW as discussed before, depending on the sensitivity of a particular experiment. 
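As a quick numerical illustration of the peak amplitude and peak frequency formulas quoted above, the short script below evaluates them for a given wall tension and bias term. It is only a sketch: the prefactors A_w = 0.8, ε̃_gw = 0.7 and g_* = 10.75 are taken from the text, while the benchmark inputs fed in at the end are illustrative numbers chosen in the ballpark of the region discussed for the NANOGrav fit, not the paper's exact benchmark points.

# Illustrative evaluation of the domain-wall GW peak formulas quoted above
# (amplitude and frequency scalings only; benchmark inputs are example values).
import math

def omega_gw_peak(sigma_w_TeV3, deltaV_MeV4, A_w=0.8, eps_gw=0.7, g_star=10.75):
    """Peak amplitude Omega_GW h^2 today, with sigma_w in TeV^3 and Delta V in MeV^4."""
    return (5.2e-20 * eps_gw * A_w**4 * (10.75 / g_star)**(1.0 / 3.0)
            * sigma_w_TeV3**4 / deltaV_MeV4**2)

def f_peak_Hz(sigma_w_TeV3, deltaV_MeV4, A_w=0.8):
    """Peak frequency today in Hz."""
    return 3.99e-9 * A_w**(-0.5) * math.sqrt(deltaV_MeV4 / sigma_w_TeV3)

def omega_gw(f_Hz, sigma_w_TeV3, deltaV_MeV4):
    """Broken power law: Omega_GW ~ f^3 below the peak and ~ 1/f above it."""
    fp = f_peak_Hz(sigma_w_TeV3, deltaV_MeV4)
    peak = omega_gw_peak(sigma_w_TeV3, deltaV_MeV4)
    return peak * (f_Hz / fp)**3 if f_Hz < fp else peak * (fp / f_Hz)

# Example: u ~ 200 TeV so sigma_w ~ u^3 = 8e6 TeV^3, with Delta V = 1e8 MeV^4.
sigma_w, deltaV = 8.0e6, 1.0e8
print(omega_gw_peak(sigma_w, deltaV), f_peak_Hz(sigma_w, deltaV))

For these inputs the peak sits at roughly 10^-8 Hz with Omega_GW h^2 of order 10^-9-10^-8, i.e. in the nano-Hz band relevant for the PTA data.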
As one can already notice, BP3 is already ruled out from PLANCK bound, while BP1 is beyond the reach of any future experiments proposed so far. Corresponding to the two epochs and stated before, we define two temperatures and . We are typically interested in the regime >, i.e., the DWs disappear before they dominant the Universe. Following <cit.>, reads ≃ 120 MeV √(Δ V/MeV^4/10^8) (A_w/0.8)^-1/2 (σ_w/GeV^3/10^16)^-1/2 (g_*()/10)^-1/4 , implying, for larger surface tension, it takes longer the walls to collapse, while for larger bias the opposite happens. We also define another quantity r_w = ρ_r/1+ρ_r , where ρ_r = ρ_w()/ρ_R()≃ 0.14 (A_w/0.8)^2 (σ_w/GeV^3/10^16)^2 (10^8/Δ V MeV^4) , which quantifies the energy density contained within the DW compared to that of radiation. We show the compatibility of our relevant model parameters, namely, the bias term Δ V and the VEV u (equivalently, the strain σ) in Fig. <ref> with the NANOGrav data, utilizing Eq. (<ref>) and Eq. (<ref>). We superimpose the 1 and 2-sigma contours (shown by red and blue solid curves) provided by the NANOGrav result <cit.>. We see, BP1 lies well within the 1σ contour, while the other two BPs are well off. It is possible to derive a lower and upper bound on the VEV for a fixed Δ V, as denoted by the gray dashed horizontal lines for Δ V=10^8 MeV^4. Thus, for Δ V=10^8 MeV^4, we find 189≲ u/TeV≲ 225, in compliance with NANOGrav 2σ contour. On the other hand, the viable range of bias term, that lies within 2σ CL of NANOGrav result, turns out to be 4× 10^6≲Δ V/MeV^4≲5× 10^11, as shown by the green dotted and orange dashed curves. Depending on the choice of Δ V, the upper limit on u can be pushed to larger values. Note that, the constraint on VEV mentioned above (together with the limit on Δ V), satisfies the bounds shown in Fig. <ref>. Consequence for Dirac leptogenesis: In the minimal model of Dirac leptogenesis or Dirac neutrino seesaw <cit.>, the standard model (SM) is extended by three copies of vector-like neutral singlet fermions N_L,R and three copies of right chiral part (ν_R) of light Dirac neutrinos. A real singlet scalar field φ is introduced to couple ν_R with N. A Z_2 symmetry under which φ, ν_R are odd, prevents direct coupling of the SM lepton doublet L with ν_R via SM Higgs H. After the neutral components of H and φ acquire VEV v, u respectively, light Dirac neutrino mass arises from the Type-I seesaw equivalent for Dirac neutrino as m_ν∝ y^2 u v/M_N with M_N being the scale of Dirac seesaw and y being Dirac Yukawa couplings of N with ν_L, ν_R considered to be equal. The same heavy fermions N_L,R can have out-of-equilibrium decay to achieve successful Dirac leptogenesis. Now, for u ≳ 190 TeV, preferred from NANOGrav 2023 data as discussed before, and considering light neutrino mass m_ν≤ 0.1 eV, we get M_N > y^2 × 10^17 GeV. This implies that, for order one Yukawa couplings, the scale of Dirac leptogenesis is even above the upper limit on reheating temperature, disfavouring the possibility of thermal Dirac leptogenesis. If the Yukawa couplings are made as low as electron Yukawa coupling, we have M_N > 10^7 GeV, keeping it at intermediate scale. To summarise, the possibility of low scale Dirac leptogenesis is disfavoured unless we tune Yukawa couplings involved in Dirac seesaw more than what we have in the SM. Consequence for left-right symmetry: Left-right symmetric models <cit.> have been one of the most well-studied BSM frameworks where the SM gauge symmetry is extended to SU(3)_c× SU(2)_L× SU(2)_R× U(1)_B-L. 
In addition to the gauge symmetry, the model also has an in-built Z_2 symmetry or parity symmetry under which the left and right sector fields are interchanged, keeping the framework parity or left-right symmetric. While a detailed discussion related to GW probe of LRSM can be found in <cit.>, here we briefly summarise the implications of 2023 PTA data on the scale of left-right symmetry. Unlike in the Z_2-odd scalar singlet model discussed above, the LRSM gauge symmetry does not allow arbitrary bias terms. It is possible to generate such bias term via higher dimensional operators invariant under gauge symmetry but explicitly breaking the parity symmetry. While Planck scale effects are expected to break any global symmetries like parity <cit.>, the corresponding bias term can lead to DW disappearance <cit.>. As shown in <cit.>, the bias term in the minimal model is Δ V ∼ u^6/M_P^2, where u is the SU(2)_R × U(1)_B-L as well as parity breaking scale. Due to the dependence of the bias term on the scale of left-right breaking, the constraint on left-right symmetry breaking scale is stronger than what we had on the scale of Dirac leptogenesis. As shown in Fig. <ref>, the scale of left-right symmetry should be approximately around ∼ 10^6 GeV (shown by the black solid horizontal line), in order to be in agreement with NANOGrav 15 yr data at 2σ. Conclusion: We have investigated the consequence of the recent PTA results on the scale of Dirac leptogenesis and left-right symmetric model. In minimal version of both these scenarios, domain walls arise due to spontaneous breaking of a discrete Z_2 symmetry. While the bound on the scale of Dirac leptogenesis from PTA data depend upon the size of Dirac Yukawa couplings, for conservative choice of such couplings with fine-tuning at the level of the SM, we find a lower bound M_N > 10^7 GeV, keeping leptogenesis at intermediate scales. However, for order one Yukawa coupling, this bound is much stronger M_N > 10^17 GeV keeping only non-thermal Dirac leptogenesis option viable. Due to the constrained structure of the LRSM, we get much tighter constraint on the scale of left-right breaking namely M_ LR∼ 10^6 GeV, in order to satisfy NANOGrav 15 yr data, keeping the model out of reach from direct search experiments like colliders. Future data from PTA or other GW experiments are expected to shed more light on the parameter space of this model, by constraining the spectrum at higher frequencies. The work of D.B. is supported by the science and engineering research board (SERB), Government of India grant MTR/2022/000575. apsrev
http://arxiv.org/abs/2307.02964v1
20230706130215
On Vafa-Witten equations over Kaehler manifolds
[ "Xuemiao Chen" ]
math.DG
[ "math.DG", "math.AG" ]
Vafa-Witten equations]On Vafa-Witten equations over Kähler manifolds Chen]Xuemiao Chen In this paper, we study the analytic properties of solutions to the Vafa-Witten equation over a compact Kähler manifold. Simple obstructions to the existence of nontrivial solutions are identified. The gauge theoretical compactness for the ^* invariant locus of the moduli space is shown to behave similarly as the Hermitian-Yang-Mills connections. More generally, this holds for solutions with uniformly bounded spectral covers such as nilpotent solutions. When spectral covers are unbounded, we manage to take limits of the renormalized Higgs fields which are intrinsically characterized by the convergence of the associated spectral covers. This gives a simpler proof for Taubes' results on rank two solutions over Kähler surfaces together with a new complex geometric interpretation. The moduli space of (2) monopoles and some related examples are also discussed in the final section. Department of pure mathematics, University of Waterloo, Ontario, Canada, N2L [email protected] [ [ ===== empty amsplain § INTRODUCTION The Vafa-Witten equation is on a pair (A, ϕ) where A is a unitary connection on a unitary bundle (E,H) over a compact Kähler manifold (X,ω) and ϕ is a smooth section of (E) ⊗ K_X satisfying (1) F_A^0,2=0; (2) √(-1)Λ_ω F_A+[ϕ;ϕ]=μ𝕀; (3) _A ϕ=0. Here K_X denotes the canonical line bundle of X and locally if we write ϕ=M σ for some smooth n-form σ, then [ϕ;ϕ]=[M,M^*] |σ|^2. In literature, the Vafa-Witten equation is a special case of the Higgs equations (<cit.>). We call it the Vafa-Witten equation due to that when _ X=4 it is the reduction of the Vafa-Witten equation over Riemannian four manifold (<cit.>). Here we mention a few important and classical special cases. When ϕ=0, the solutions are the Hermitian-Yang-Mills connections which correspond to slope polystable vector bundles by the celebrated Donadlson-Uhlenbeck-Yau theorem (<cit.> <cit.>). When _ X=2 i.e. over the Riemann surfaces, this corresponds to the Hitchin equation and the space of solutions correspond to the space of polystable Higgs bundles (<cit.>). In general, the solutions of the Vafa-Witten equation correspond to the slope polystable Higgs bundle (, ϕ) (<cit.> <cit.>). Now we recall some background and motivations. The Vafa-Witten equation is originally proposed and studied by Vafa and Witten as a search of evidence for S-Duality (<cit.>). The solutions are given by pairs (A,a) where A is a unitary connection on a principal (2)-bundle P over a Riemannian four manifold (X,g) and a is a section of Λ^+ ⊗_P that satisfy * d_Aa=0; * F_A^+=1/2[a;a] [The original version proposed by Vafa and Witten involves one more term and we refer the readers to <cit.> for related discussions.]. When (X,g) is Kähler, by writing a=ϕ+ϕ^*, the equation is reduced as the form above. Assuming uniform L^2 bounds on Higgs fields, it is known in various situations (<cit.> <cit.> <cit.>) that the Uhlenbeck compactness could apply. A very intriguing feature in general is that there does not exist such a priori L^2 norm bound. For this, the sequential compactness for the solutions over general Riemannian four manifolds has been considered by Taubes (<cit.>), where he managed to take the limit of the section part by looking at the renormalized section a/a_L^2 and produce a notion of _2 harmonic 2-form when a_L^2 becomes unbounded in a sequence. One of the main motivations for this paper is to understand such phenomenon in the complex analytic setting. 
Recently, by purely algebraic geometric methods, the moduli space of stable Higgs pairs over a projective surface has been used to define the Vafa-Witten invariants by Tanaka and Thomas (<cit.>). The moduli space admits a natural ^* action which comes from rescaling the Higgs fields. The key ingredient in this approach is the use of the ^* invariant locus of the moduli space. The corresponding solutions with nonzero Higgs fields in this case are called monopoles and when the Higgs field vanishes this is the classical instantons or Hermitian-Yang-Mills connections. A simple linear algebra lemma shows the ^* invariant locus lies in the so-called nilpotent cone. An observation made in this paper is that even over general Kähler manifolds, one can control the curvature for the nilpotent solutions to the Vafa-Witten equations. Thus the known compactness results about Hermitian-Yang-Mills connection could apply. More generally, similar statements holds when we have uniform control of the associated spectral covers. Furthermore, the moduli space of rank two monopoles are very simple which is actually compact and it's disjoint from the compactified moduli space of Hermitian-Yang-Mills connections. This could be potentially used to study the Vafa-Witten invariants over Kähler surfaces using gauge theory and give an analytic interpretation of such over projective surfaces, while over general Kähler manifolds, this might add more interesting phenomenon to Hermitian-Yang-Mills connections. §.§ Main results Now we briefly sketch the results proved in this paper. To better explain the relation with Taubes' notion of _2 harmonic two forms over Kähler surfaces, we gradually increase the generality of discussions. We first notice some consequences of the Weizenböck formula by using an auxiliary Hermitian-Yang-Mills metric H_K_X on K_X instead of the naturally induced metric from the Kähler metric (see Section <ref>). This gives the following restrictions on the base manifold. Suppose there exists a solution (A,ϕ) with ϕ≠ 0 to the Vafa-Witten equation. Then (K_X) ≥ 0 where if X=0, [ϕ;ϕ]=0, ∇ϕ=0 and c_1(X)=0 i.e. X is a Calabi-Yau manifold [The Kähler metric ω is not necessarily a Calabi-Yau metric]. Here ∇ denote the connection on End(E)⊗ K_X induced by A and the Chern connection on K_X given by H_K_X. In particular, this gives Over a Calabi-Yau manifold, the solutions (A,ϕ) to the Vafa-Witten equation consist of a Hermitian-Yang-Mills connection A and a Higgs field ϕ which locally can be written as ϕ=M ν where ν is a nonzero holomorphic n-form so that * ∇_A M =0, |M|=1; * ∇_H_Kν =0 where ∇_H_K is induced by a flat metric on K_X. In particular, by putting local choices of all such ν together, this gives a notion of _k holomorphic n-form for some k (see Corollary <ref>). Another new observation is about the nilpotent solutions (A,ϕ) to the Vafa-Witten equation. Here ϕ being nilpotent means locally if we write ϕ=Mdz_1 ∧⋯ dz_n, then M^N=0 for some N∈_+. There exists a uniform C^0 bound on the Higgs fields part for nilpotent solutions, in particular monopole solutions, to the Vafa-Witten equations over compact Kähler manifolds. The compactness for the nilpotent solutions to the Vafa-Witten equation behave exactly the same as the Hermitian-Yang-Mills connections. 
More precisely, given any sequence of nilpotent solutions (A_i, ϕ_i) to the Vafa-Witten equation, passing to a subsequence, up to gauge transforms, (A_i, ϕ_i) converges locally smoothly to (A_∞, ϕ_∞) over X∖ Z where Z={x∈ X: lim_r→ 0lim inf_i∫_B_x(r) |F_A_i|^2 ≥ϵ_0 } is a codimension at least two subvariety of X and ϵ_0 is the regularity constant. Furthermore, A_∞ defines a unique reflexive sheaf _∞ over X and ϕ_∞ extends to be a global section of Hom(_∞, _∞) ⊗ K_X over X; Σ is a codimension at least two subvariety that admits a decomposition Σ=Σ_b ∪ Sing(_∞) where Σ_b denotes the pure codimension two part of Σ. As a sequence of currents (F_A_i∧ F_A_i) →(F_A_∞∧ F_A_∞)+ 8π^2 ∑ m_k Σ_k where m_k ∈_+ and Σ_k denotes the irreducible components of Σ_b. In literature, Σ is usually called the bubbling set, Σ_b is called the blow-up locus and m_k are called the multiplicities of the sequence along Σ_k. More generally, we have Given a sequence of solutions (A_i, ϕ_i) to the Vafa-Witten equation with uniformly bounded spectral covers, the Uhlenbeck compactness applies to the sequence as the nilpotent case in Proposition <ref>. Below we assume that the spectral covers associated to (A_i, ϕ_i) are not uniformly bounded. By passing to a subsequence, this is equivalent to (see Corollary <ref>) lim_i ϕ_i_L^2 = ∞. Denote ρ_i=ϕ_i/ϕ_i_L^2. First, to explain the connection with Taubes' notion of _2 harmonic forms, we assume E is a rank two bundle and consider solutions with trace free Higgs fields. Given a sequence of rank two solutions (A_i,ϕ_i) to the Vafa-Witten equation with ϕ_i=0 and lim_i ϕ_i=∞, there exists a _2 holomorphic n-form (L, ν, Z) associated to the spectral cover defined by b_∞ (see Lemma <ref>) so that there exists a sequence of isometric embeddings σ_i: L|_X∖ Z→(_i)|_X∖ Z satisfying |∇_A_iσ_i| 0, ρ_i - σ_i ν 0, and ∇_A_i(ρ_i - σ_i ν) 0 over any compact subset Y⊂ X∖ Z. When _ X=2 and (E,H) has structure group (2), this has been obtained by Taubes (<cit.>). The results here give a simpler proof of Taubes' results together with a complex geometric interpretation of the limiting data. In general, we give an intrinsic description of the role of the spectral cover by using the natural torsion free sheaves associated to the spectral covers. More precisely, over the total space of the canonical bundle K_X, there exists a tautological line bundle ≅π^* K_X which has a global tautological section τ over K_X. Given any spectral cover π: X_b → X associated to b∈⊕_i=1^r H^0(X, K_X^⊗ i) (see Definition <ref>), we denote ^b=|_X_b. Then π_* ^b is a locally free sheaf of rank equal to r away from the discriminant locus Δ_b and it has a tautological section τ_b=π_*(τ|_X_b). Furthermore, it has a natural Hermitian metric induced by K_X away from Δ_b. Here Δ_b could be the whole set X in general. We denote X_b_∞ as the limit of the spectral cover X_b(ρ_i) where ρ_i = ϕ_i/ϕ_i_L^2. Then in general, we have the following intrinsic form of the convergence of the renormalized Higgs fields away from the discriminant locus Δ_b_∞ of the spectral cover X_b_∞. Assume lim_i ϕ_i=∞. By passing to a subsequence, there exists a sequence of isometric embedings σ_i: π_*^b_∞|_X∖Δ_b_∞→(_i) ⊗ K_X|_X∖Δ_b_∞ so that ∇_i σ_i 0, σ_i τ_b_∞-ρ_i 0, and ∇_i(σ_i τ_b_∞-ρ_i )0 over any fixed compact subset Y⊂ X∖Δ_b_∞. In the last section, we study the space of rank two monopoles with trace free Higgs fields and some examples. It turns out that in general, given a monopole, the connection has to be reducible (See Proposition <ref>). 
In particular, an (2) monopole is given by a connection on a line bundle together with a section of K_X, thus the moduli space of (2) monopoles is compact. We also give an example that even on the trivial bundle, the moduli space of stable Higgs pairs could be rich. §.§ Notation Give two quantities Q_1 and Q_2 which are usually matrix valued functions, we use * Q_1≲ (≳) Q_2 to denote Q_1 ≤ (≥) C Q_2 for some constant C independent of Q_1 and Q_2 * Q_1 ∼ Q_2 to denote C^-1 Q_2 ≤ Q_1 ≤ C Q_2 for some constant C independent of Q_1 and Q_2 §.§ Acknowledgment The author would like to thank Siqi He for helpful discussions on related topics. This work is partially supported by NSERC and the ECR supplement. § PRELIMINARIES §.§ Linear algebra at one point Let V be a complex vector space of dimension m. Consider the adjoint action (V) ↷(V) and the GIT quotient (V)//(V). As a space, this parametrizes the closures of (V)-orbits in (V) by identifying orbits that have nontrivial intersections. Denote the natural projection by p: (V)/(V) →(V)//(V). The following is well-known and instructive for what type of generic matrices one might consider in general For any y∈(V)//(V), p^-1(y) contains a unique semisimple orbit i.e. the orbit contains a diagonalizable representative, and a unique regular orbit i.e. the orbit contains a representative so that each eigenvalue has exactly one Jordan block. Furthermore, (V)//(V)≅^m and the identification is given by the invariant functions as those b_i(ϕ) satisfying (λ𝕀- ϕ)=λ^m+b_1(ϕ)λ^m-1+⋯ b_m(ϕ). We also need a well known simple linear algebra lemma which we include a proof for completeness The following holds for any m× m matrices * |M|^2 ∼ |[M,M^*]|+|b_1(M)|^2 + ⋯ |b_m(M)|^2/m; * 0≤ |M|^2-∑_i |λ_i|^2≲ |[M,M^*]| where {λ_i}_i denote the set of eigenvalues counted with multiplicities. For (1), we first show |M|^2 ≲ |[M,M^*]|+|b_1(M)|^2 + ⋯ |b_m(M)|^2/m. Otherwise, there exists a sequence of matrices M_k with |M_k|=1, but |[M_k,M_k^*]|+|b_1(M_k)|^2 + ⋯ |b_n(M_k)|^2/n≤1/k. Passing to a subsequence, we can assume M_k → M with |M|=1. Also, we know |[M,M^*]|+|b_1(M)|^2 + ⋯ |b_m(M)|^2/m=0. which implies M=0. This is a contradiction. The other direction is trivial. For (2), we only need to show the second inequality. Otherwise, there exists a sequence of matrices M_k so that k|[M_k, M_k^*]|≤ |M_k|^2-∑_i |λ_i^k|^2. By doing unitary transforms, we can assume M_k are all upper triangular. Write M_k=D_k+U_k where D_k is diagonal and U_k is strictly upper triangular. By normalizing properly, we assume |U_k|=1. Then |[U_k,U_k^*]+[D_k, U_k^*]+[D_k^*,U_k]|=|[M_k, M_k^*]| ≤1/k. Since [D_k, U_k^*]+[D_k^*,U_k]=[D_k, U_k^*]-[D_k,U_k]^* is skew symmetric and [U_k,U_k^*] is symmetric, they are orthogonal. Thus the above implies |[U_k,U_k^*]|^2 ≤ |[U_k,U_k^*]+[D_k, U_k^*]+[D_k^*,U_k]|^2 ≤1/k^2. Passing to a subsequence, we can assume U_k → U which is strictly upper triangular, |U|=1 and [U,U^*]=0. This is a contradiction. §.§ Stability We will use the following standard notion of stability for a Higgs pair (, ϕ) where is a holomorphic vector bundle over a Kähler manifold (X,ω) together with a holomorphic section ϕ∈ H^0(X, () ⊗ K_X). (, ϕ) is called stable (semistable) if for any subsheaf ⊂ with ϕ() ⊂⊗ K_X and 0< <, the following holds μ()=∫_X c_1() ∧ω^n-1/< μ(). Given a Higgs pair (, ϕ), the spectral cover X_b(ϕ) associated to (, ϕ) is defined as X_b(ϕ)=Ψ^-1(0) ⊂ K_X where Ψ: K_S → K_S^⊗ r, λ↦(λ𝕀_-ϕ)=t^r+b_r-1(ϕ) t^r-1 + ⋯ b_r(ϕ). and b(ϕ)=(b_1(ϕ), ⋯ b_r(ϕ)). Here r=. 
The natural projection π: X_ϕ→ X is a covering branched along the discriminant locus Δ_b(ϕ)=(P(b_1(ϕ),⋯ b_r(ϕ))=0) ⊂ X where P denotes the discriminant polynomial for t^r+b_r-1 t^r-1 + ⋯ b_r=0 and P(b_1(ϕ),⋯ b_n(ϕ))∈ H^0(X, K_X^⊗ r). For purpose later, we choose b(ϕ) to index the spectral cover X_b(ϕ) because it only needs data from b(ϕ) rather than all the information about ϕ. Indeed, given any b∈⊕_k=1^r H^0(X, K_X^⊕ k), we could define X_b={λ∈ K_X: λ^r + b_1 λ^r-1 + ⋯ b_r=0} as above which is a covering of X branched along the discriminant locus Δ_b similar as above. We will still call X_b the spectral cover associated to b. Assuming b_1=⋯ b_r-1=0, X_b is the cyclic cover by taking r-root of -b_r in K_X. In the following, we fix a smooth bundle E over X. Denote _Higgs:={(,ϕ): Higgs pairs so that ≅ E smoothly }/∼ where (,ϕ) ∼ (',ϕ') if and only if there exists an isomorphism f so that the following diagram commutes ⊗ K_S ⊗ K_S. ["ϕ", from=1-1, to=1-2] ["f"', from=1-1, to=2-1] ["ϕ'"', from=2-1, to=2-2] ["f", from=1-2, to=2-2] We also denote the subspace of stable Higgs pairs as _Higgs^s ⊂_Higgs. The Hitchin map is defined as h: _Higgs→⊕_k=0^ H^0(S, K_S^⊗ k), (, ϕ) → (b_1(ϕ), ⋯, b_n(ϕ)) and the fiber h^-1(0) is called the nilpotent cone. Now we recall the spectral construction from algebraic geometry. For this, we assume that X is a projective variety. Over a projective manifold X, there exists a natural one-to-one correspondence between the space of Higgs sheaves (, ϕ) where ϕ∈ H^0(X, () ⊗ K_X) and the space of compactly supported coherent sheaves over K_X. In particular, supp()=X_b(ϕ). Given a compactly supported sheaf over K_X, define =π_* which is a π_*_K_X module. On the other hand, we know π_*(_K_X)=⊕_k≥ 0^k K_X^-1. Thus is an _X module together with an action ⊗ K_X^-1→ which determines uniquely. The action can be described explicitly. Given any local section s of over U which by definition is a section of of π^-1(U) and a section σ of K_X^-1, then σ.s = σ(τ) s where τ is the tautological section of π^*K_X over K_X. Now it is a standard result in algebraic geometry that π_* induces an equivalence between the two categories since π is affine. §.§ Vafa-Witten equation Given a smooth unitary bundle (E,H) over a compact Kähler manifold (X,ω), a solution to the Vafa-Witten equation is a pair (A, ϕ) satisfying (1) F_A^2,0=0; (2) √(-1)Λ_ω F_A+[ϕ;ϕ]=μ𝕀; (3) _A ϕ=0, where A is a unitary connection on (E,H) and ϕ is a section of (E) ⊗ K_X. Here locally if we write ϕ=M σ for some holomorphic n-form σ, then [ϕ;ϕ]=[M,M^*] |σ|^2. A solution (A,ϕ) to the Vafa-Witten equation is called irreducible if it can not be written as a direct sum of nontrivial solutions to the Vafa-Witten equation. We denote ^* as the space of irreducible solutions to the Vafa-Witten equation mod gauge equivalence. Here two solutions (A,ϕ) and (A', ϕ') are called equivalent if there exists a smooth isomorphism f:E → E so that f ∘_A ∘ f^-1=_A', f ϕ f^-1 = ϕ'. Now we have (<cit.> <cit.>) There exists a complex analytic isomorphism Φ: ^s_Higgs→^*. Later in this paper, we will study the limiting behavior of a sequence of solutions to the Vafa-Witten equation. For this, we need Given a solution (, ϕ) to the Vafa-Witten equation, the following holds C^-1([ϕ;ϕ]_L^2+1)≤F_A_L^2≤ C([ϕ;ϕ]_L^2+1) for some dimensional constant C. Indeed,we know F_A = Λ_ω F_A.ω + F_A^0. By the Hodge-Riemann property, we know |F_A^0|^2 ∼ -(F_A^0 ∧ F_A^0) ∧ω^n-2/(n-2)!/ω^n/n! 
In particular, this implies |F_A|^2 ∼ |Λ_ω F_A|^2 -(F_A^0 ∧ F_A^0) ∧ω^n-2/(n-2)!/ω^n/n! ∼ |Λ_ω F_A|^2 - (F_A ∧ F_A) ∧ω^n-2/(n-2)!/ω^n/n! ∼ |[ϕ;ϕ]|^2-(F_A ∧ F_A) ∧ω^n-2/(n-2)!/ω^n/n!. The conclusion follows. Given a sequence of solutions (A_i, ϕ_i) to the Vafa-Witten equation on a fixed unitary bundle, F_A_i_L^2 is uniformly bounded if and only if [ϕ_i;ϕ_i]_L^2 is uniformly bounded. In general, it is expected that such a bound does not exist globally (<cit.>). This makes the compactification problem for the Vafa-Witten equation more subtle than the Hermitian-Yang-Mills case in general. §.§ Monopoles We start with the following definition A Higgs pair (, ϕ) is called ^* invariant if for any t∈^*, there exists an isomorphism f: → so that the following diagram commutes ⊗ K_S ⊗ K_S ["ϕ", from=1-1, to=1-2] ["f"', from=1-1, to=2-1] ["tϕ"', from=2-1, to=2-2] ["f", from=1-2, to=2-2] which is equivalent to that (,ϕ) defines a fixed point in _Higgs under the ^* action. We need the following simple observation Given a ^* invariant Higgs pair (, ϕ), it must be nilpotent. By definition, for any t, there exists some isomorphism f so that f^-1∘ tϕ∘ f = ϕ. which implies for any k≥ 1 b_k(ϕ)=b_k(tϕ) for any t∈^*. For a fixed point x∈ X, since b_i(tϕ)(x) is a polynomial in t and b_k(0)(x)=0, we must have b_k(ϕ)≡ 0 for any k≥ 1. More generally, by repeating the argument of <cit.>, we could prove A Higgs pair (, ϕ) is ^* invariant if and only if there exists a decomposition =⊕_i,j_i,j where _i,j are holomorphic sub-bundles of so that ϕ: _i,j→_i-1, j+1⊗ K_X. Given ^* invariant Higgs pair (, ϕ), we can choose t∈^* which is not a root of unity. Then by definition, there exists an isomorphism f of so that f∘ϕ= t ϕ∘ f. Since b_k(f) is a constant, we know the eigenvalues of f are constants. For any eigenvalue λ of f, we denote _λ=((λ𝕀-f)^). Since f∘ϕ= t ϕ∘ f, ϕ maps _λ to _tλ. Thus we can further wirte =⊕_i=1^s (_λ_1⊕⋯_t^n_1λ_1) where * _t^iλ_j=((t^iλ_j𝕀-f)^), t^-1λ_j and t^n_j+1λ_j are not eigenvalues for f; * t^i λ_j ≠ t^i'λ_j' for (i,j) ≠ (i', j'); * ϕ maps _t^iλ_j to _t^i+1λ_j. Now one can index the decomposition appropriately to get the desired decomposition. For the other direction, given such ϕ, for any t, we can construct f by scaling each _i,j properly. There are two important classes of solutions to the Vafa-Witten equations. The first one is when ϕ=0, it recovers the Hermitian-Yang-Mills connections and also referred as instantons when _ X =4. The second type is the so-called monopoles A solution (A, ϕ) to the Vafa-Witten equation is called a monopole if ((E, _A), ϕ) is ^* invariant and ϕ≠ 0. We also call the corresponding Higgs bundle (, ϕ) a monopole. We note the following Suppose (A, ϕ) is a monopole. For any 1≠ t∈^*, there exists a non-scaling isomorphism f: (E, _A) → (E, _A) so that tϕ∘ f_t=f_t ∘ϕ and ∇_A f_t =0. By definition, for any t, there exists an isomorphism f_t: (E, _A) → (E, _A) so that f_t^-1∘ tϕ∘ f_t = ϕ. Since ϕ≠ 0, we know f_t is not a scaling for any t≠ 1. Now (f_t^*A, f_t^*ϕ)=(f^*A, tϕ) gives another solution to the Vafa-Witten equation corresponding to the Higgs pair (, tϕ). Note for any t∈ S^1, [tϕ;tϕ]=[ϕ;ϕ], thus we have Λ_ω (F_f_t^*A-F_A)=0 for any t∈ S^1. This is equivalent to Λ_ω(_A (f_t^-1∂_A f_t))=0 and we can simplify it as Λ_ω(_A(∂_A f_t))=0 which by Kähler identify implies ∂_A^* ∂_A f_t=0 thus ∂_A f_t=0 for any t∈ S^1. Combined with f_t being holomorphic i.e. _A f_t=0, this implies f_t is parallel. 
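Before moving on, it may help to spell out the rank-two, trace-free case of the definitions above; the following LaTeX snippet is a short illustration added here for orientation and is not part of the original text.

% Rank-two illustration (sketch): for rk(E) = 2 and tr(phi) = 0 the spectral data
% reduce to a single pluricanonical section b_2(phi) = det(phi).
\[
  \det\bigl(\lambda\,\mathbb{I}_{\mathcal E}-\phi\bigr)=\lambda^{2}+b_{2}(\phi),
  \qquad b_{2}(\phi)=\det\phi\in H^{0}\!\bigl(X,K_X^{\otimes 2}\bigr),
\]
\[
  X_{b}=\{\lambda^{2}+b_{2}=0\}\subset K_X,
  \qquad \Delta_{b}=\{b_{2}=0\}\subset X .
\]
% If (E, phi) is C^*-invariant, the lemma above forces b_2(phi) = 0, so locally
% phi = M dz_1 \wedge ... \wedge dz_n with M^2 = 0 by Cayley-Hamilton: the Higgs field
% is nilpotent and the spectral cover degenerates to the zero section of K_X with
% multiplicity two.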
§ THE WEIZENBÖCK FORMULA WITH CONSEQUENCES Below we fix a solution (A, ϕ) to the Vafa-Witten equation over a compact Kähler manifold (X,ω). Instead of using the naturally induced connection on K_X from the Kähler metric, we fix a Hermitian-Yang-Mills metric H_0 on K_X i.e. √(-1)Λ_ωF_H_0=2π(K_X) and use the associated Chern connection ∇_H_K on K_X. Then we denote by B the Chern connection on (E) ⊗ K_S induced by A and ∇_H_K. The norm of the section for ϕ will be computed by H_K except [ϕ;ϕ] which is already defined by using the Kähler metric. The following holds 1/2∇_B^* ∇_Bϕ + [[ϕ; ϕ], ϕ]-2π (K_X). ϕ=0. In particular, Δ_ |ϕ|^2 +|[ϕ;ϕ]|^2 +|∇_B ϕ|^2≤ 2π(K_X) |ϕ|^2 and ∇_Bϕ_L^2^2 + [ϕ;ϕ]_L^2^2∼ 2π(K_X) ϕ^2_L^2. By the Weinzeböck formula (<cit.>), we know 0=_B^*_Bϕ=1/2∇_B^* ∇_Bϕ- [√(-1)Λ F_A, ϕ]-√(-1)Λ F_H_0 .ϕ. By the Vafa-Witten equation, we know 0=_B^*_Bϕ=1/2∇_B^* ∇_Bϕ + 1/2 [[ϕ; ϕ], ϕ]- 2π (K_X).ϕ Now the second inequality follows from a direct computation and the last one follows from integrating the equality. For the last equivalence, the reason why we get ∼ instead of equality is due to that the term ([[ϕ;ϕ],ϕ],ϕ) is computed by using a mix of the Kähler metric and the Hermitian-Yang-Mills metric we put on K_X. We note the following ϕ_C^0≲ϕ_L^2. Indeed, we know from Proposition <ref> that Δ__A|ϕ|^2 ≲ |ϕ|^2. which implies Δ__A|ϕ| ≲ |ϕ| in the weak sense. The conclusion now follows from the Moser iteration. Given a solution (A,ϕ) to the Vafa-Witten equation over a Kähler manifold (X,ω), * suppose ϕ≠ 0, then K_X ≥ 0; * suppose [ϕ;ϕ]≠ 0, then K_X>0. Below we examine the case of K_X=0. Assume K_X=0. Then any nontrivial solution (A,ϕ) to the Vafa-Witten equation must satisfy [ϕ;ϕ]=0, ∇_B ϕ = 0. where if ϕ≠ 0, then K_X^⊗ k is trivial for some 0<k≤. In particular, X must be a Calabi-Yau manifold. The first part follows directly from the Weizenböck formula above. Since [ϕ;ϕ]=0, we know ϕ can be diagonalized.Thus if ϕ≠ 0, b_k(ϕ) ≠ 0 for some 0<k≤. Since K_X^⊗ k=0, b_k(ϕ) must trivialize K_X^⊗ k, thus c_1(K_X)=0. The conclusion follows. In particular, we have Locally ϕ=M ν * ∇_A M=0; * ∇_H_Kν=0. where ν is a local holomorphic (n,0) form and M is a local section of (). Indeed, since H_X is a flat metric, locally we can always choose a holomorphic (n,0) form ν≠ 0 so that ∇ν = 0. Then we can write ϕ=Mν. Since 0=∇_A (Mν)=∇_A(M) ν + M ∇_H_Kν = (∇_A M)ν we know ∇_A M =0. The conclusion follows. We observe the following Given any matrix M, suppose k is any integer so that b_k(M)≠ 0. Then the condition b_k(e^iθM)∈_+ has exactly k solutions for e^iθ. Indeed, by definition b_k(e^iθM)=e^ikθ b_k(M). Write b_k(M)=a e^iθ_0 where a>0. Now e^ikθ b_k(M) ∈_+ is equivalent to e^i(kθ+θ_0)=1 which forces θ=2π m/k-θ_0/k. Thus we can exactly k solutions to b_k(e^iθM)>0 for e^iθ corresponding to m=0, ⋯ k-1. Given this, we have the following A nontrivial Vafa-Witten solution (A,ϕ) over a Calabi-Yau manifold naturally gives rise to a _k holomorphic n-form for some k, i.e. a global section of K_X over X defined up to a _k action. Indeed, by Proposition <ref>, for any x∈ X, locally over a neighborhood of x, we can write * ∇_A M_x=0, |M_x|=1; * ∇_H_Kν=0. where ν is a local holomorphic (n,0) form and M is a local section of (). Since [ϕ;ϕ]=0, ϕ is diagonalizable. Since ϕ≠ 0, we know b_k(ϕ) ≠ 0 for some k. Now if we put all the local choices together ∪_x (U_x, M_x) by requiring M_x satisfy b_k(M_x)∈^+ which as we already see from Lemma <ref>, different choices differ by _k which gives a natural principal _k bundle P__k. 
Let L^-1 be the natural flat bundle associated to P_^k and denote L as the dual of L^-1. Since Mν is globally defined as ρ, we know the corresponding choice of local choices ν glued together to be a holomorphic section of L⊗ K_S. Furthermore, there exists an isometry σ: L →() so that σ(ν)=ρ. Now we assume (A_i, ϕ_i) is a sequence of solutions to the Vafa-Witten equations over a Calabi-Yau manifold (X, ω). |ϕ| is a constant. This follows from the simple fact that ϕ_i is parallel. We can normalize ρ_i=ϕ_i/|ϕ_i|. Then we have the following compactness results Passing to a subsequence, up to gauge transforms, (A_i, ρ_i) converges locally smoothly to (A_∞, ρ_∞) over X∖ Z; A_∞ is a Hermitian-Yang-Mills connection and defines a unique reflexive sheaf _∞ over X and ρ_∞ is a parallel section of (_∞, _∞⊗ K_X) and can be extended to be a global section of Hom(_∞, _∞) ⊗ K_S over X; Σ is a codimension at least two subvariety that admits a decomposition Σ=Σ_b ∪ Sing(_∞) where Σ_b denotes the pure codimension two part of Σ. As a sequence of currents (F_A_i∧ F_A_i) →(F_A_∞∧ F_A_∞)+ 8π^2 ∑ m_k Σ_k where m_k ∈_+ and Σ_k denotes the irreducible components of Σ_b. § USEFUL ESTIMATES FROM THE SPECTRAL COVER We will generalize Mochizuki and Simpson's estimates (<cit.> <cit.>) for Higgs bundles over Riemann surfaces to general Kähler manifolds. It is done by closely following Mochizuki's arguments (<cit.>) with adaptions to higher dimensions. See also <cit.> for related discussions. §.§ C^0 estimates on the Higgs fields from the control of spectral cover We work near 0∈ B_R⊂^n where B_R is the ball of radius R centered at the origin endowed with the standard flat metric [We assume this for simplicity of discussion and the estimates can be easily adapted for general Kähler metrics.]. Let (A, ϕ) be a solution to the Vafa-Witten equation defined near a neighborhood of B_R. Write ϕ=M dz_1 ∧⋯ dz_n. Unless specified, all the constants below will only depend on the controlled geometry of the base but not on the pair (A,ϕ). Suppose all the eigenvalues of M are bounded by S over B_R. Then max_B_R/2 |M| ≤ C_0(S+1) for some constant C_0. A direct computation shows Δ_log |M|^2≤([√(-1)Λ_ω F_A,M], M)/|M|^2=-|[M,M^*]|^2/|M|^2. By Lemma <ref>, we know |[M,M^*]| ≥ C(|M|^2-∑ |λ_i|^2)≥ 0 for some dimensional constant C>0 where {λ_i}_i denote the set of eigenvalues of M counted with multiplicities. Thus Δ_log |M|^2 ≤ -C^2 (|M|^2-∑ |λ_i|^2)^2/|M|^2. Assume |M|^2 > 2 ∑ |λ_i|^2 at a point x, then Δ_log |M|^2 ≤ - C^2/4 |M|^2. Now we consider the auxiliary function logC'/(R^2-|z|^2)^2 for some constant C' to be determined later. A direct computation shows Δ_log(C'/(R^2-|z|^2)^2)= -2n R^2/(R^2-|z|^2)^2+2(n-1)|z|^2/(R^2-|z|^2)^2≥ -2nR^2/C'C'/(R^2-|z|^2)^2. In particular, we have at the points where |M|^2 ≥ 2 ∑_i |λ_i|^2 Δ_ [log|M|^2- log(C'/(R^2-|z|^2)^2)] ≤ -C^2/4[|M|^2-C'/(R^2-|z|^2)^2] where C' is chosen so that 2nR^2/C'≤C^2/4 and C' ≥ 2R^4 r S^2. Let U={z∈ B_R: |M|^2 > C'/(R^2-|z|^2)^2} which is a precompact open set in B_R. Suppose it is not empty. Then for any z∈ U, we have |M|^2> C'/R^4≥ 2r S^2>2 ∑_i |λ_i|^2 which implies the following over the open set U Δ_ [log|M|^2- log(C'/(R^2-|z|^2)^2)] < 0. Since U is relatively compact in B_R, we know [log|M|^2- log(C'/(R^2-|z|^2)^2)]|_∂ U=0 and by maximum principle, the following holds over U log|M|^2- log(C'/(R^2-|z|^2)^2) ≤ 0 which is a contradiction. Thus U is empty. In particular, max_B_R/2 |M| ≤2√(C')/R^2 for any C' so that C' ≥8nR^2/C^2 and C'≥ 2R^4 r S^2. The conclusion follows. 
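The pointwise inequality invoked in the proof just given, |[M,M^*]| ≥ C(|M|^2-∑|λ_i|^2) for a dimensional constant C>0, can be probed numerically. The snippet below is an informal experiment rather than a proof: the rank r=3 and the sample size are arbitrary choices, and the printed minimum is only an empirical lower bound consistent with the lemma. It also records the classical fact that |M|^2-∑|λ_i|^2 ≥ 0, with equality exactly for normal matrices.

```python
import numpy as np

rng = np.random.default_rng(1)

def defect(M):
    """|M|_F^2 - sum_i |lambda_i|^2  (>= 0, and zero iff M is normal)."""
    evals = np.linalg.eigvals(M)
    return np.linalg.norm(M) ** 2 - np.sum(np.abs(evals) ** 2)

def comm_norm(M):
    """Frobenius norm of the commutator [M, M^*]."""
    C = M @ M.conj().T - M.conj().T @ M
    return np.linalg.norm(C)

r = 3
ratios = []
for _ in range(20000):
    M = rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))
    d = defect(M)
    if d > 1e-6:                      # avoid the 0/0 case of (nearly) normal matrices
        ratios.append(comm_norm(M) / d)

# the ratio stays bounded away from zero over this sample, as the lemma predicts
print(min(ratios))

# for a normal matrix both sides vanish simultaneously
A = rng.standard_normal((r, r))
H = A + A.T                            # real symmetric, hence normal
print(np.isclose(comm_norm(H), 0.0), np.isclose(defect(H), 0.0))   # True True
```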
In particular, this gives |Λ_ω F_A| ≤ C(S^2+1+μ) for some constant C. §.§ Estimates from the splitting associated to the spectral data Below we assume (, ϕ)=⊕_λ∈Λ_λ over a neighborhood of B_R so that * M=⊕_λ M_λ where λ is a holomorphic function and M_λ: _λ→_λ has λ as its eigenvalue with multiplicity equal to (_λ); * λ≠λ' for any λ, λ' ∈Λ; Denote * S:=max_λsup_B_R|λ|; * δ=max_λ≠λ'sup_B_R |λ-λ'| Then we assume Assumption: δ≥ 1 and S ≤ C δ. For each λ, denote the holomorphic projection to _λ through the splitting as p_λ: →_λ and the orthogonal projection to _λ induced by the metric H as π_λ: →_λ. We first note There exists a constant C_1>0 so that |p_λ|<C_1. In particular, |p_λ-π_λ|<C_1. This follows exactly the same as <cit.> which is essentially a computation at a fixed point z. For any λ fixed at the point z, pick a complex number α so that |α -λ| = δ/100 Let γ_α be a circle centered around α with radius equal to δ/100. By Cauchy's integral formula we know p_λ=1/2π√(-1)∫_γ_α (ζ𝕀_-M)^-1 dζ. by using the simple fact that (ζ𝕀_-M)^-1=⊕_λ' (ζ𝕀__λ'-M|__λ')^-1. Now the conclusion follows from that along the circle γ_α |(ζ𝕀_-M)^-1| ≤ C' (2 δ)^-r. (δ+1)^r-1≤ C' for some suitable choice of constant C'. Here we used the simple fact that for a given invertible matrix D |D^-1|≤ C”max_γ |γ|^-r|D|^r-1 for some constant C” independent of D and γ is among all the eigenvalues of D. We need the following For any holomorphic section N of (, ) with [M,N]=0, the following holds Δ_log |N|^2 ≤ -|[M^*,N|/|N|^2. The statement follows from a direct computation as Δ_log |N|^2 ≤([√(-1)Λ_ω F_A,N], N)/|N|^2 =-([[M,M^*],N],N)/|N|^2 =([[M^*,N],M]+[[N,M],M^*], N)/|N|^2 =([[M^*,N],M], N)/|N|^2 =-|[M^*N]|^2/|N|^2. Using this equation, we can improve the estimate on |p_λ-π_λ|. For some constant C_2>0 and ϵ_0>0, the following holds |p_λ-π_λ| ≤ C_2 e^-ϵ_0 δ over B_R/4. We first note |π_λ|^2 +|p_λ-π_λ|^2= |p_λ|^2 where |π_λ|^2 =r_λ:=_λ, thus 1+|p_λ-π_λ|^2/r_λ=|p_λ|^2/r_λ. By Lemma <ref>, since p_λ-π_λ are bounded, this implies (C')^-1|p_λ-π_λ|^2/r_λ≤log|p_λ|^2/r_λ≤ C' |p_λ-π_λ|^2/r_λ so it suffices to control log|p_λ|^2/r_λ. By Lemma <ref>, we know Δ_log|p_λ|^2/r_λ≤ -|[M^*,p_λ]|^2/|p_λ|^2≤ -ϵ' δ^2 log|p_λ|^2/r_λ. where the second inequality follows from a direct computation by using the matrix form under a basis associated to the orthogonal splitting given by p_λ (see also <cit.>). Consider the comparison function e^ϵ”δ |z|^2 which satisfies Δ_ e^ϵ”δ |z|^2≥ -ϵ' δ^2 e^ϵ”δ |z|^2 over B_R/2 for suitable choice ϵ”. Now we choose C” so that [C” - log|p_λ|^2/r_λ]|_∂ B_R/2>0. i.e. C”>min_∂ B_R/2log|p_λ|^2/r_λ Consider U={z∈ B_R/2: C” e^-ϵ”δR^2/4 e^ϵ”δ |z|^2 < log|p_λ|^2/r_λ} which is precompact and open in B_R/2. Over U, we know Δ_(log|p_λ|^2/r_λ-C” e^-ϵ”δR^2/4 e^ϵ”δ |z|^2)< 0. On the other hand, by maximum principle, we know over U log|p_λ|^2/r_λ≤ C” e^-ϵ”δR^2/4 e^ϵ”δ |z|^2 which is a contradiction to the definition of U. Thus U=∅. In particular, over B_R/2 log|p_λ|^2/r_λ≤ C” e^-ϵ”δR^2/4 e^ϵ”δ |z|^2 Thus over B_R/4, we have log|p_λ|^2/r_λ≤ C” e^-ϵ”δR^2/4 e^ϵ”δ |z|^2≤ C” e^-ϵ”δ3R^2/16. The conclusion follows by taking C_2=C” and ϵ_0=ϵ”3R^2/16. There exists some constant C_4 and ϵ_2 >0 so that over B_R/8 ∫_B_R/8 |∂_A p_λ|^2=∫_B_R/8|_A p_λ^*|^2 ≤ C_4 e^-ϵ_2 δ. A direct computation using _A p_λ=0 shows Δ_|p_λ-p_λ^*|^2 =([√(-1)Λ_ω F_A, p_λ], p_λ-p_λ^*)-|∂_A p_λ|^2 Take χ to be any cut-off function so that χ|_B_R/8=1, χ|_B^c_R/4=0, |∇χ| ≤C/R, |∇^2 χ|≤C/R^2. 
Multiplying the above by χ and doing integration, we know ∫_B_R/8 |∂_A p_λ|^2 ≤ |∫_B_R/4χΔ_|p_λ-p_λ^*|^2| + |∫_B_R/4([√(-1)Λ_ω F_A, p_λ], p_λ-p_λ^*)| ≤ ∫_B_R/4 |p_λ-p_λ^*|^2 |Δ_χ| +∫_B_R/4 |[√(-1)Λ_ω F_A, p_λ]| |p_λ-p_λ^*| Now using π_λ=π_λ^* we know by Proposition <ref> that |p_λ-p_λ^*| ≤ |p_λ-π_λ|+|π_λ^*-p_λ^*| ≤ 2C_2 e^-ϵ_0 δ and we also know from Corollary <ref> that |√(-1)Λ_ω F_A|≤ C(1+S^2+μ). In particular, for suitable choice of ϵ_3, we have ∫_B_R/8 |∂_A p_λ|^2 ≤ C_4 e^-ϵ_3 δ. There exists some constant C_5 and ϵ_3 >0 so that over B_R/8 ∫_B_R/8 |_A π_λ|^2=∫_B_R/8|∂_A π_λ|^2 ≤ C_5 e^-ϵ_3 δ. By Proposition <ref>, it suffices to show |_A π_λ|≤ |_A p_λ^*|. Indeed, we know _A π_λ=_A(π_λ-p_λ). Combined with the orthogonal decomposition -_A(p_λ^*)=_A(p_λ-p_λ^*)=_A(p_λ-π_λ)+_A(π_λ-p_λ^*) this implies the inequality |_A π_λ|≤ |_A p_λ^*|. §.§ Applied to a sequence when the spectral covers become unbounded Later in our discussion, we will apply the estimates to study the convergence of a sequence. For this, we assume (A_i, ϕ_i) is a sequence of solutions to the Vafa-Witten equations over B_2R satisfying * b_k(ρ_i) → b_k^∞ where ρ_i = r_i^-1ϕ_i and r_i →∞ as i→ 0; * Δ(b^∞)=∅ where b^∞=(b_1^∞, ⋯ b^∞_ E). In particular, for i large, ϕ_i are regular and semi-simple over B_R and we can decompose (E, _A_i)=⊕_k ^i_λ_k where _λ_k has rank one and by writing ϕ_i=M^i dz_1 ⋯ dz_n locally M^i acts on _λ_k by scaling with λ_k. Denote π^i_k: (E, _A_i) →^i_λ_k and p^i_k: (E, _A_i) →^i_λ_k as the holomorphic and orthogonal projections respectively. The following holds * lim_i sup_B_R/8 |p^i_k-π^i_k| → 0; * lim_i ∫_B_R/8 |∇_A_iπ^i_k|^2 → 0; * lim_i ∫_B_R/8 |∇_A_i p^i_k|^2 → 0. This directly follows from applying the estimates above to π^i_k and p^i_k. Denote δ^i=max_k≠ k'sup_ B_R|λ^i_k - λ^i_k'| and S^i=max_k≠ k'sup_B_R |λ^i_k|. Since b_k(r_i^-1ϕ_i^k) → b_k^∞ for each k and Δ(b^∞)=∅ over B_2R, we know the spectrum of r_i^-1ϕ_i also converges and we denote r_i^-1λ_k^i →λ_k^∞. Then for any k≠ k' λ_k^∞≠λ_k'^∞ thus one can easily see that δ^i →∞ as i→∞ and S^i ≤ Cδ^i for some C independent of i. Now the conclusion follows from Proposition <ref>, Proposition <ref> and Corollary <ref> applied to the sequence. § UHLENBECK COMPACTNESS FOR SOLUTIONS WITH UNIFORMLY BOUNDED SPECTRAL COVERS In the following, we will always consider a sequence of solutions (A_i, ϕ_i) to the Vafa-Witten equation on a fixed unitary bundle (E, H). We first note the following by a simple argument Assume ϕ_i are all nilpotent. There exists a dimensional constant C so that for any nilpotent solution (A,ϕ) to the Vafa-Witten equation ϕ_C^0≤ C. By Proposition <ref>, we know Δ_ |ϕ|^2 ≤ -|[ϕ;ϕ]|^2+2π(K_S) |ϕ|^2. Since ϕ is nilpotent i.e. b_k(ϕ)≡ 0 for any k, by Lemma <ref>, we know |ϕ|^4 ≲ |[ϕ;ϕ]|^2 thus combined with the above, this implies Δ_ |ϕ|^2 +|ϕ|^4 ≲ 2π(K_S) |ϕ|^2. Suppose |ϕ| achieves its maximum at x_m, then |ϕ|^4(x_m) ≲ 2π(K_S) |ϕ|^2(x_m). which implies |ϕ|(x_m) ≲√(2π K_S). Actually, this follows from the C^0 estimate we derived in Proposition <ref> which gives the following Assume the spectral covers of X_b(ϕ_i) are uniformly bounded, then the Higgs fields ϕ_i are uniformly bounded. In particular, this gives Assume the spectral covers of X_b(ϕ_i) are uniformly bounded, then F_A_i_L^2≤ C and Λ_ω F_A_i_C^0≤ C. 
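At a point, the holomorphic projections p_λ of the previous subsection are just the spectral projections of the matrix M, and the contour-integral formula used there, p_λ=1/2π√(-1)∫_γ (ζ𝕀-M)^-1 dζ, can be checked directly in coordinates. The sketch below is illustrative only: the matrix, the eigenvalue separation, the contour radius and the discretisation of the circle are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# a 3x3 matrix with well separated eigenvalues plus a non-normal perturbation
D = np.diag([0.0, 4.0, 9.0]).astype(complex)
M = D + 0.3 * (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

evals, V = np.linalg.eig(M)
k = int(np.argmin(np.abs(evals)))      # target the eigenvalue near 0
lam = evals[k]

# spectral (holomorphic) projection from the eigen-decomposition:
# column k of V tensored with row k of V^{-1}
P_eig = np.outer(V[:, k], np.linalg.inv(V)[k, :])

# the same projection from the Cauchy integral over a small circle around lam
# that encloses no other eigenvalue
radius = 1.0
thetas = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
P_int = np.zeros_like(M)
for th in thetas:
    zeta = lam + radius * np.exp(1j * th)
    dzeta = 1j * radius * np.exp(1j * th) * (2.0 * np.pi / len(thetas))
    P_int += np.linalg.solve(zeta * np.eye(3) - M, np.eye(3)) * dzeta
P_int /= 2.0j * np.pi

print(np.allclose(P_eig, P_int, atol=1e-8))                                   # True
print(np.allclose(P_int @ P_int, P_int, atol=1e-8),                           # idempotent
      np.allclose(M @ P_int, P_int @ M, atol=1e-8))                           # commutes with M
```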
Given the estimates above, by Uhlenbeck's compactness results (<cit.> <cit.>), we know Given any sequence of solutions (A_i, ϕ_i) to the Vafa-Witten equation with uniformly bounded spectral covers, passing to a subsequence, up to gauge transforms, (A_i, ϕ_i) converges locally smoothly to (A_∞, ϕ_∞) over X∖ Z where Z={x∈ X: lim_r→ 0lim inf_i∫_B_x(r) |F_A_i|^2 ≥ϵ_0 } is a closed subset of X of Hausdorff codimension at least four. Here ϵ_0 is the regularity constant. In <cit.>, the convergence is in L^p_1, loc for general admissible connections for any fixed p>1. Since in our case the Vafa-Witten equations are elliptic after fixing gauge, the convergence can be improved to be C^∞_loc by standard bootstrapping arguments similar as the Yang-Mills case (<cit.> <cit.>). To finish the proof of Corollary <ref>, we need to show that the bubbing set is complex analytic which follows from similar argument as <cit.> <cit.> and we include a sketched explanation. Z is a codimension at least two subvariety of X. Furthermore, A_∞ defines a unique reflexive sheaf _∞ over X and ϕ_∞ extends to be a global section of Hom(_∞, _∞) ⊗ K_S over X; Σ is a codimension at least two subvariety that admits a decomposition Σ=Σ_b ∪ Sing(_∞) where Σ_b denotes the pure codimension two part of Σ. As a sequence of currents (F_A_i∧ F_A_i) →(F_A_∞∧ F_A_∞)+ 8π^2 ∑ m_k Σ_k where m_k ∈_+ and Σ_k denotes the irreducible components of Σ_b. The argument follows exactly as the Hermitian-Yang-Mills case by Tian (<cit.>) combined with the results by Uhlenbeck <cit.> and Bando-Siu (<cit.>). More precisely, it is shown the monotonicity formula and ϵ regularity holds for a sequence of integrable unitary connections with bounded Hermitian-Yang-Mills/Hermitian-Einstein tensor and L^2 norm of the curvature. With these two key ingredients, one can show Σ is 2n-4 rectifiable. Now except the extension part, the conclusion follows as <cit.> and <cit.>. The fact that A_∞ defines a reflexive sheaf follows from <cit.> and the extension of A_∞ follows from <cit.>. The extension of ϕ_∞ follows from the Hartog's property of sections of reflexive sheaves. § COMPACTNESS FOR RANK TWO SOLUTIONS WITH TRACE FREE HIGGS FIELDS §.§ _2 Holomorphic n-forms We first define the notion of _2 holomorphic n-forms by generalizing Taubes' notion of _2 harmonic 2 forms. A _2 holomorphic n-form is defined as a triple (L, ρ, Z) satisfying * Z is a Hausdorff codimension at least two closed subset of X * L is the natural complex flat line bundle associated to a principal _2 bundle over X∖ Z; * ρ∈ H^0(X∖ Z, L⊗ K_S) and |ρ| is a Hölder continuous function over X; * Z=|ρ|^-1(0). Actually the definition tells more Given a nonzero _2 holomorphic n-form (L, ρ, Z), the following holds * ρ^2 can be extended to be a holomorphic section of K_X over X; * Z is a subvariety of pure codimension one; * the principal _2 bundle associated to L arises as the cyclic covering by taking the square root of ρ^2. Since L is defined by a principal _2 bundle, L^⊗ 2 = _X as a holomorphic line bundle. Thus ρ^⊗ 2 gives a holomorphic section of (L⊗ K_X)^⊗ 2 over X∖ Z. By the extension property of analytic functions (<cit.>), we know ρ^⊗ 2 can be extended to be a global section of K_X^⊗ 2 over X. In particular, we know Z=ρ^-1(0) is either empty or a hypersurface. The last statement follows from definition. 
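The _2 nature of ρ in the definition and proposition above is the familiar two-valuedness of a square root: ρ^2 extends to an honest section of K_X^⊗ 2, while ρ itself changes sign along a loop around the zero locus Z. The toy computation below illustrates this in a one-variable local model, with b_2(z)=z standing in for the quadratic section near a simple zero; it is not part of the argument.

```python
import numpy as np

b2 = lambda z: z                       # local stand-in for b_2 with a simple zero at z = 0

thetas = np.linspace(0.0, 2.0 * np.pi, 2001)
loop = 0.5 * np.exp(1j * thetas)       # a small loop around the zero locus Z = {b_2 = 0}

root = np.sqrt(b2(loop[0]))            # pick a branch of the square root at the start point
for z in loop[1:]:
    # analytic continuation: at each step keep the branch closest to the previous value
    candidates = np.array([np.sqrt(b2(z)), -np.sqrt(b2(z))])
    root = candidates[np.argmin(np.abs(candidates - root))]

print(np.isclose(root, -np.sqrt(b2(loop[0]))))   # True: the branch returns with the opposite sign

# rho is therefore only defined up to the Z_2 deck transformation of the double cover,
# while rho^2 = b_2 is single valued on the base, exactly as stated above.
```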
In particular, this gives the following nonexistence results Suppose K_X < 0, then there does not exist any nontrivial _2 holomorphic n-forms Otherwise, since ρ^2 gives a nontrivial section of K_X^⊗ 2, (K_X) ≥ 0 which is a contradiction. Fix (A, ϕ) to be a rank two solution of the Vafa-Witten equation with (ϕ)=0. Denote Z=(b_2(ϕ))^-1(0). We have the following There exists natural _2 holomorphic n-forms (L_b(ϕ), ν, Z) associated to the spectral cover X_b(ϕ). Indeed, for any x∈ X∖ Z, we know b_1(ϕ)=0 and b_2(ϕ)=ϕ≠ 0 near x. Thus locally near x, the spectral cover is a trivial 2-1 cover and we can decompose =_ν^x⊕_-ν^x where ν^x is a nonzero holomorphic (n,0) form near x and ϕ acts on L_ν as a scaling by ν^x. Now we can cover X∖ Z with such open sets and get data ∪_x (U_x, ν^x) which can be naturally viewed as a holomorphic section of L⊗ K_X over X∖ Z. We denote the section as ν. Then we know |ν|^2=|b_2(ϕ)| thus |ν|=√(|b_2(ϕ)|) is a Hölder continuous function across Z. In general, given any b_2∈ H^0(X, K_X^⊗ 2), one can take the cyclic covering defined by the square root of b_2 in K_X. The _2 holomorphic n-form associated to X_b is then a choice of union of local sections away from the discriminant locus Δ_b. §.§ Sequential limits In the following, we will consider a sequence of rank two solutions (A_i, ϕ_i) to the Vafa-Witten equation with (ϕ_i)=0 and lim_i ϕ_i_L^2 = ∞. Denote ρ_i=ϕ_i/ϕ_i r_i=ϕ_i, then (A_i, ρ_i) satisfies * F^0,2_A_i=0; * √(-1)Λ_ω F_A_i+r_i^2[ρ_i;ρ_i]=c𝕀; * (ρ_i) =0; * lim r_i =∞ We start with the following simple observation by Proposition <ref>. [ρ_i;ρ_i]_L^2≤1/r_i. By standard elliptic theory, passing to a subsequence, we can always assume b_2(ρ_i) → b_2^∞. We need the following simple observation b_2^∞≠ 0. Indeed, we know |ρ_i|^2 ∼ |[ρ_i;ρ_i]|+|b_1(ρ_i)|^2 + |b_2(ρ_i)|= |[ρ_i;ρ_i]| + |b_2(ρ_i)| thus 1∼∫_X |[ρ_i;ρ_i]|+ ∫_X |b_2(ρ_i)| Since b_2(ρ_i) → b_2^∞ smoothly over X, by taking the limit and using that ∫_X|[ρ_i;ρ_i]| → 0, we know ∫_X |b_2^∞| ≠ 0. The conclusion follows. Given this, we can take the spectral cover X_b^∞ (resp. X_b(ρ_i)) defined by b^∞=(b_1^∞, b_2^∞) (resp. b(ϕ_i)=(b_1(ρ_i), b_2(ϕ_i))). L By passing to a subsequence, X_b(ρ_i) converges naturally to X_b^∞. This follows from the simple fact that b(ρ_i) → b^∞ smoothly. Thus √(b(ρ_i))→√(b^∞) smoothly. Now we are ready to prove the main result in the rank two case Fix any x∈ X ∖ Z, i.e. b_2^∞(x) ≠ 0. Since b_2(ρ_i) → b_2^∞ smoothly, we know for i large, b_2(ρ_i)(x) ≠ 0, thus locally near x, the spectral cover is a trivial 2 to 1 cover and we can decompose _i=_ν^x_i⊕_-ν^x_i where ν^x_i is a nonzero holomorphic (n,0) form near x and ρ_i acts on L_ν as a scaling by ν^x_i. We also know ν^x_i converge to a square root ν^x_∞ of b_2^∞(x). By taking a cover of X∖ Z and making a choice of square root, we get data ∪_x (U_x, ν^x_∞) away from Z which defines a _2 holomorphic n-form (L, ν_∞, Δ(b_∞)) associated to the spectral cover X_b_∞. Now the construction of σ_i is simply by locally defining σ_i^x = π_i+^x-π_i-^x over U_x where π_i±^x denotes the orthogonal projections to _±ν_i^x respectively. Now it remains to prove σ_i satisfies the properties required over X∖ Z. For this, denote p_i±: →_±ν_i to be the local holomorphic projections. We know then σ_i ν_∞-ρ_i=ν_∞π_i+^x-ν_∞π_i-^x-(ν_i p^x_i+-ν_i p^x_i-). By Proposition <ref>, we know locally on a smaller open set |p_i±^x| ≤ C, |π_i±^x| ≤ C, and |π^x_±-p^x_±| 0, |∇_A_iπ^x_i±| 0, |∇_A_i p_i^±| 0 as i→∞ near x. 
Combined with ν_i →ν_∞, we get the desired properties over any compact set Y ⊂ X∖ Z. Assume _ X =2. This recovers Taubes' results in the Kähler surface case (<cit.>). Here by assuming _X=2, the pair (A, a=ϕ+ϕ^*) satisfies the real version of the Vafa-Witten equation over a Riemannian four manifold * d_Aa=0; * F_A^+=1/2[a;a]. which is considered by Taubes. In particular, this gives an algebraic characterization of the limiting data in Taubes' analytic results over Kähler surfaces. Assume X is a Kähler surface. The _2 harmonic two form obtained by Taubes in <cit.> is determined by the _2 holomorphic 2-form associated to the spectral cover X_b^∞. It suffices to note the spectral cover recovers the _2 principal bundle in the Riemannian setting and by taking the real part the holomorphic (2,0) form ν, it recovers the closed self dual form used in <cit.>. It is very straightforward to generalize the discussion in this section to the case when the spectral cover is cyclic i.e. b_1=⋯ b_r-1=0. § COMPACTNESS IN GENERAL In this section, we will explain the role of the spectral cover in general when studying the limit of the renormalized Higgs fields. The rank two case with trace free Higgs fields is a variant of this by taking into consideration the special symmetry of the spectral cover. Now we assume (A_i, ϕ_i) is a sequence of solutions to the Vafa-Witten equation on a fixed unitary bundle (E,H) over a compact Kähler manifold (X, ω) satisfying lim_i ϕ_i = ∞ and we denote ρ_i = ϕ_i/ϕ_i. Similar as before, By passing to a subsequence, b_k(ρ_i) converges to b_k^∞∈ H^0(X, K_X^⊗ k) smoothly over X for any k. Furthermore, b_k^∞≠ 0 for some k. The convergence follows from elliptic theory together with the simple fact that |b_k(ρ_i)| ≤ |ρ_i|^k which implies ∫_X |b_k(ρ_i)|^2/k≤ 1. For the nontriviality of the limit, by Lemma <ref>, we know |ρ_i|^2 ∼ |[ρ_i;ρ_i]|+|b_1(ρ_i)|^2 + ⋯ |b_n(ρ_i)|^2/n. Since by assumption [ρ_i;ρ_i]_L^2≤1/r_i which limits to zero while ρ_i_L^2=1, we know from the above that lim_i ∫ |[ρ_i;ρ_i]|+|b_1(ρ_i)|^2 + ⋯ |b_n(ρ_i)|^2/n∼ 1 while implies b_k^∞≠ 0 for some k≥ 1. Now we can take the spectral cover X_b^∞ where b^∞=(b_1^∞, ⋯ b^∞_n). Assume below Δ_b^∞≠ X i.e. the spectral cover X_b^∞ is generically regular semi-simple. For i large, the spectral cover X_b(ρ_i) is generically regular semi-simple. This follows from the elementary fact that b_k(ρ_i)→ b_k^∞. Thus the spectral cover converges to X_b(ρ_∞) at a generic point in a natural way. In particular, ρ_i is generally regular semi-simple for i large. Recall from the introduction, over the total space of the canonical bundle K_X, there exists a tautological line bundle ≅π^* K_X which has a global tautological section τ over K_X. Given any spectral cover π: X_b → X associated to b∈⊕_i=1^r H^0(X, K_X^⊗ i), we denote ^b=|_X_b. Then π_* ^b is a locally free sheaf of rank equal to r away from the discriminant locus Δ_b and it has a tautological section τ_b=π_*(τ|_X_b). Furthermore, it has a natural Hermitian metric induced by K_X away from Δ_b. Now we are ready to finish the proof of Theorem <ref> which is essentially similar to the rank two case combined with the general analytic estimates in Proposition <ref>. Fix any x∈ X ∖Δ_b_∞. By Lemma <ref>, for i large, near x, we can decompose (E, _A_i)=⊕_k _λ^k_i where λ^k_i is a holomorphic (n,0) form near x and ρ_i acts on _λ^k_i as tensoring by λ^k_i. We also know λ^k_i converge to some λ^k_∞∈ X_b^∞. 
Now the construction of σ_i: π_* ^b_∞→ End(E) ⊗ K_X is simply defined uniquely by the condition σ_i(λ^k_∞)=π_i^k ⊗λ^k_i over U_x where π_i^k denotes the orthogonal projections to _λ_i^k. Then such local σ^i can be glued together as a global section σ_i: π_* ^b_∞→ End(E) ⊗ K_X. Now it remains to prove σ_i satisfies the properties required which only needs to be checked locally. For this, denote the local holomorphic projections to _λ^k_i as p_i^k: →_λ^k_i. We know then locally near x as above σ_i τ^b_∞-ρ_i=λ_∞^i π_i^k-λ_∞^i π_i^k-(λ_i^k p^k_i-λ_i p^k_i). By Proposition <ref>, we know locally on a smaller open set |p_i^k| ≤ C, |π_i^k| ≤ C, and |π^k_i-p^k_i| 0, |∇_A_iπ^k_i| 0, |∇_A_i p_i^k| 0 as i→∞ near x. Combined with λ_i^k →λ_∞^k, we get the desired properties for σ_i over any compact set Y⊂ X∖Δ_b_∞. § EXAMPLES We will analyze some interesting examples of moduli spaces of solutions to the Vafa-Witten equations by focusing on the rank two case. It turns out the moduli space of (2) monopoles can be described explicitly while even on the trivial bundle, the moduli space of stable Higgs bundles could be quite rich. §.§ Moduli space of (2) monopoles In this section, we study some properties of the moduli space of monopoles. For this, we denote ^mon as the moduli space of monopoles mod gauge. Below we assume E=2 and A is trivial. An (2) monopole (A,ϕ) is defined as a solution to the Vafa-Witten equation on a fixed (2) bundle satisfying * A = 0; * (A,ϕ) is ^* invariant (see Definition <ref>). We start with the following which tells how to construct monopoles Suppose is a holomorphic line bundle over (X, ω) with c_1()>0. Then the Higgs bundle (= ⊕^-1, ϕ) where ϕ=[ 0 0; β 0 ] and 0≠β∈ H^0(X, K_X ⊗^-2) is a monopole. By Proposition <ref>, (, ϕ) is ^* invariant. It remains to show that (, ϕ) is stable. Suppose ' is a rank one subsheaf of invariant under ϕ. Then we observe the natural projection must map ' →^-1 nontrivially since ϕ maps to K_X⊗^-1 nontrivially. In particular, μ(')<0 and is stable. This finishes the proof. Now we have The SU(2) monopoles are exactly of the form (B, β) where * B is a unitary connection on a line bundle L with F^0,2_B=0; * F_B ∧ω^n-1/(n-1)! = β∧β̅. where β is a holomorphic section of K_X ⊗^-2 and =(L,_B). In particular, ∫ c_1(L)∧ω^n-1>0. Suppose (A,ϕ) is an (2)-monopole. By Proposition <ref>, for any t≠^*, there exists an isomorphism f: → so that ∇_A f=0 and f∘ϕ = t ϕ∘ f. We fix one point x and locally write ϕ(x)=M dz_1 ∧⋯ dz_n. where M is nilpotent. Since ϕ≠ 0., we can assume M takes the Jordan form [ 0 1; 0 0 ] by fixing a basis (v_1, v_2) for |_x. Then M fv_1 = 1/t f M v_1 =0, thus fv_1=λ v_1 for some λ≠ 0. Suppose fv_2=av_1+bv_2, then Mfv_2= aMv_1+bMv_2= b v_1 while Mfv_2=tfMv_2=tf v_1=tλ v_1 thus b=tλ. Choose λ≠ 1. We know f_t is diagonalizable. Using the eigenspace of f_t, we can decompose (E,A)=(L,B)⊕ (L^-1,-B) orthogonally where B is an integrable unitary connection on L. Furthermore, L and L^-1 corresponds to different eigenvalues of g. We arrange so that c_1(L).[ω]^n-1≥ 0. Under this splitting, it is straightforward to see that ϕ=[ 0 0; β 0 ] where β is a holomorphic section of ^-2⊗ K_X. The conclusion follows from a direct computation. Summarizing the above, we have ^mon={(⊕^-1, ϕ): 0≠ϕ∈ H^0(X, K_X ⊗^-2)}/∼ and ^mon is compact. In particular, it is disjoint from the compactification of the moduli space of Hermitian-Yang-Mills connections. The first part follows directly from Lemma <ref> and Proposition <ref> by noting that ∫ |β|^2=c_1(L).[ω]^n-1<∞. 
The second part follows from the simple fact that the Higgs fields of the monopoles are always nonzero.
§.§ Stable trivial Higgs bundles In this section, we study the moduli space ℳ_0:={(𝒪⊕𝒪, ϕ) stable : ϕ∈ H^0(X, End(𝒪⊕𝒪) ⊗ K_X) }/∼. We first note that the ^* invariant locus in ℳ_0 is empty. This follows directly from Proposition <ref>, which tells us that there are no monopoles in ℳ_0. In particular, any ^* invariant point of ℳ_0 would have to have vanishing Higgs field, which is impossible by our stability condition. Below we fix a basis for H^0(X, K_X) as {σ_i}_i=1^m; then by definition, we can always write ϕ=∑_i=1^m M_i σ_i where the M_i are constant 2× 2 complex matrices. We have the following simple observation: (𝒪⊕𝒪, ϕ) is stable if and only if M_1, ⋯, M_m have no common eigenvector. Given a stable Higgs bundle (𝒪⊕𝒪, ϕ), suppose v is a common eigenvector for M_1, ⋯, M_m. Then the map 𝒪→𝒪⊕𝒪, 1 ↦ v gives a rank one subsheaf of 𝒪⊕𝒪 of slope zero invariant under ϕ. This violates the stability of (𝒪⊕𝒪, ϕ). Now assume (𝒪⊕𝒪, ϕ) is not stable, i.e. there exists some rank one subsheaf ℒ⊂𝒪⊕𝒪 invariant under ϕ with c_1(ℒ).[ω]^n-1=0. By considering the nontrivial map (𝒪⊕𝒪)^* →ℒ^*, we know ℒ^* has a nonzero section; since it has degree zero, ℒ^* must be trivial and so is ℒ. Then ℒ is generated by a constant vector-valued function, which we denote by v; invariance of ℒ under ϕ gives ϕ v = σ v for some σ∈ H^0(X, K_X). Write σ=∑_i λ_i σ_i. We obtain M_i v= λ_i v for any i, i.e. v is a common eigenvector of {M_i}_i=1^m. The conclusion follows. We have the following well-known linear algebra lemma: two complex 2× 2 matrices M and N have no common eigenvector if and only if det(MN-NM)≠ 0. Suppose det(MN-NM) ≠ 0; then M and N have no common eigenvector: otherwise, if Mv=λ v and Nv=λ'v for some v ≠ 0, then (MN-NM)v=0 and thus det(MN-NM)=0. For the other direction, suppose now that det(MN-NM)=0; we want to show that M and N have a common eigenvector. WLOG, we can assume M is in its Jordan form. When M is a scaling, this is trivial. When M has two different eigenvalues, a direct computation shows that N is either upper-triangular or lower-triangular, so either [ 1; 0 ] or [ 0; 1 ] is a common eigenvector. Now assume M is of the form [ λ 1; 0 λ ]; a direct computation shows N takes the form [ a b; 0 d ], thus they have the common eigenvector [ 1; 0 ]. The conclusion follows. In particular, this gives that (𝒪⊕𝒪, ϕ=M σ_1+N σ_2) is stable if and only if det(MN-NM)≠ 0.
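The linear algebra lemma above, and hence the stability criterion det(MN-NM)≠ 0 for (𝒪⊕𝒪, ϕ=Mσ_1+Nσ_2), is easy to confirm experimentally. The snippet below is illustrative only; the brute-force search assumes M has two distinct eigenvalues, which is the generic case.

```python
import numpy as np

rng = np.random.default_rng(3)

def have_common_eigenvector(M, N, tol=1e-8):
    # Any common eigenvector is in particular an eigenvector of M; when M has two
    # distinct eigenvalues these are (up to scale) exactly the columns of V below.
    _, V = np.linalg.eig(M)
    for v in V.T:
        w = N @ v
        # v is an eigenvector of N iff N v is parallel to v
        if np.linalg.norm(w - (np.vdot(v, w) / np.vdot(v, v)) * v) < tol:
            return True
    return False

def det_comm(M, N):
    return np.linalg.det(M @ N - N @ M)

# generic pair: nonzero determinant of the commutator, no common eigenvector
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
N = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
print(abs(det_comm(M, N)) > 1e-8, have_common_eigenvector(M, N))           # True False

# a pair sharing the eigenvector [ 1; 0 ]: the determinant of the commutator vanishes
M2 = np.array([[1.0, 2.0], [0.0, 3.0]])
N2 = np.array([[4.0, -1.0], [0.0, 5.0]])
print(np.isclose(det_comm(M2, N2), 0.0), have_common_eigenvector(M2, N2))  # True True
```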
http://arxiv.org/abs/2307.02199v1
20230705105132
Estimating mean profiles and fluxes in high-speed turbulent boundary layers using inner/outer-layer transformations
[ "Asif Manzoor Hasan", "Johan Larsson", "Sergio Pirozzoli", "Rene Pecnik" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
On the Adversarial Robustness of Generative Autoencoders in the Latent Space [ August 1, 2023 ============================================================================ § NOMENCLATURE @l @ = l@ @l @ = l@ τ_w wall shear stress Re_τ ρ_w u_τδ/μ_w, friction Reynolds number ρ density Re_τ^* ρ̅u_τ^* δ/μ̅, semi-local friction Reynolds number μ dynamic viscosity M_τ u_τ/√(γ R T_w), friction Mach number k thermal conductivity M_∞ u_∞/√(γ R T_∞), free-stream Mach number c_p specific heat capacity at constant pressure r recovery factor γ specific heat capacity ratio c_f 2τ_w/(ρ_∞ u_∞^2), skin-friction coefficient T temperature c_h q_w/(c_pρ_∞ u_∞ (T_w-T_r)), heat-transfer coefficient R specific gas constant q_w wall heat flux u_τ √(τ_w/ρ_w), friction velocity y wall-normal coordinate u_τ^* √(τ_w/ρ̅), semi-local friction velocity y^* y/δ_v^*, semi-local wall-normal coordinate δ_v μ_w/(ρ_w u_τ), viscous length scale Pr μ c_p/ k, Prandtl number δ_v^* μ̅/(ρ̅u_τ^*), semi-local viscous length scale κ von Kármán constant δ (or δ_99), boundary layer thickness Π Coles' wake parameter θ momentum thickness C log-law intercept δ^* displacement thickness μ_t eddy viscosity U transformed (incompressible) velocity u untransformed velocity Re_θ ρ_∞ u_∞θ/μ_∞, momentum thickness Reynolds number Re_δ^* ρ_∞ u_∞δ^*/μ_∞, displacement thickness Reynolds number Re_δ_2 ρ_∞ u_∞θ/μ_w 2@lSubscripts w wall ∞ free-stream e boundary layer edge (y=δ) r recovery 2@lSuperscripts + wall-scaled (·) Reynolds averaging § INTRODUCTION Accurately predicting drag and heat transfer for compressible high-speed flows is of utmost importance for a range of engineering applications. This requires the precise knowledge of the entire velocity and temperature profiles. A common approach is to use compressible velocity scaling laws (transformation), that inverse transform the velocity profile of an incompressible flow, together with a temperature-velocity relation. Current methods <cit.> typically assume a single velocity scaling law, neglecting the different scaling characteristics of the inner and outer layers. In this Note, we use distinct velocity transformations for these two regions. In the inner layer, we utilize a recently proposed scaling law that appropriately incorporates variable property and intrinsic compressibility effects <cit.>, while the outer layer profile is inverse-transformed with the well-known Van Driest transformation <cit.>. The result is an analytical expression for the mean shear valid in the entire boundary layer, which combined with the temperature-velocity relationship in <cit.>, provides predictions of mean velocity and temperature profiles at unprecedented accuracy. Using these profiles, drag and heat transfer is evaluated with an accuracy of +/-4% and +/-8%, respectively, for a wide range of compressible turbulent boundary layers up to Mach numbers of 14. § PROPOSED METHOD An incompressible velocity profile is composed of two parts: (1) the law of the wall in the inner layer, and (2) the velocity defect law in the outer layer. We can model the law of the wall either by composite velocity profiles <cit.>, or by integrating the mean momentum equation using a suitable eddy viscosity model <cit.>. Here, we follow the latter approach and utilize the Johnson-King <cit.> eddy viscosity model. Likewise, there are several formulations available to represent the defect law <cit.>, of which we use Coles’ law of the wake <cit.>. 
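A minimal sketch of this reference-profile construction is given below. It integrates the Johnson-King-type inner-layer relation dU^+/dy^+ = 1/(1+κ y^+ D) with D=[1-exp(-y^+/A^+)]^2 and adds Coles' wake contribution (Π/κ)[1-cos(π y/δ)]. The constants κ=0.41 and A^+=17 are the values quoted in the next subsection (they reproduce a log-law intercept of about 5.2), while the friction Reynolds number and the wake parameter used here are illustrative inputs, not results of this work.

```python
import numpy as np

kappa, A_plus = 0.41, 17.0          # constants quoted in the next subsection (Johnson-King model)
Pi, Re_tau = 0.55, 1000.0           # illustrative wake parameter and friction Reynolds number

y_plus = np.linspace(0.0, Re_tau, 200001)          # wall-normal grid in wall units
D = (1.0 - np.exp(-y_plus / A_plus)) ** 2          # near-wall damping function
dUdy = 1.0 / (1.0 + kappa * y_plus * D)            # inner-layer mean shear dU+/dy+

# cumulative trapezoidal integration for the inner (law-of-the-wall) part
U_inner = np.concatenate(
    ([0.0], np.cumsum(0.5 * (dUdy[1:] + dUdy[:-1]) * np.diff(y_plus))))

# Coles' wake contribution, obtained by integrating (Pi/kappa) * pi * sin(pi y/delta)
eta = y_plus / Re_tau                               # y / delta
U_wake = (Pi / kappa) * (1.0 - np.cos(np.pi * eta))

U_plus = U_inner + U_wake                           # incompressible reference profile

# diagnostic: apparent log-law intercept in the overlap region
mask = (y_plus > 100.0) & (y_plus < 0.15 * Re_tau)
C = np.mean(U_inner[mask] - np.log(y_plus[mask]) / kappa)
print(C)    # should sit close to the intercept of ~5.2 quoted for kappa = 0.41, A+ = 17
```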
Once the reference incompressible velocity profile is obtained, we inverse transform it using our recently proposed velocity transformation <cit.> for the inner layer, and the Van Driest (VD) transformation <cit.> for the outer layer. They are combined as follows: du̅^+ = f_3^-1 f_2^-1 f_1^-1 d U̅^+_inner + f_1^-1d U̅^+_wake, where the factors f_1, f_2, and f_3 constitute the transformation kernel proposed in <cit.> that accounts for both variable property and intrinsic compressibility effects, given as dU^+/d u^+= (1 + κ y^* D(M_τ)/1 + κy^*D(0))_f_3(1 - y/δ_v^*d δ_v^*/dy)_f_2√(ρ/ρ_w)_f_1 , where D(M_τ) = [1 - exp(-y^*/A^+ + f(M_τ))]^2. The value of A^+ differs based on the choice of the von Kármán constant κ, such that the log-law intercept is reproduced for that κ <cit.>. With κ=0.41, the value of A^+=17 gives a log-law intercept of 5.2 <cit.>, whereas, with κ = 0.384, A^+=15.22 gives a log-law intercept of 4.17. The additive term f(M_τ) accounts for intrinsic compressibility effects. <cit.> proposed f(M_τ) = 19.3 M_τ, that is independent of the chosen value of κ. In Eq. (<ref>), dU̅^+_inner is modeled using the Johnson-King eddy viscosity model as dy^*/[1+κ y^* D(0)], which after integration, recovers the incompressible law of the wall, and dU̅^+_wake = Π/κsin (π y/δ) π d(y/δ) is the derivative of the Coles' wake function <cit.>. Inserting the expressions for dU̅^+_inner, dU̅^+_wake in Eq. (<ref>), using d y^* / dy = f_2/δ_v^*, u_τ^* = u_τ f_1^-1, and upon rearrangement, we get the dimensional form of the mean velocity gradient as du̅/dy = u_τ^*/δ_v^*1/1+κ y^* D(M_τ) + u_τ^*/δ Π/κ π sin(πy/δ). Eq. (<ref>) provides several useful insights. Analogous to an incompressible flow, the mean velocity in a compressible flow is controlled by two distinct length scales, δ_v^* and δ, characteristic of the inner and outer layers, respectively. The two layers are connected by a common velocity scale u_τ^* (the semi-local friction velocity), leading to a logarithmic law in the overlap region between them. Moreover, in the overlap layer, the denominator of the first term on the right-hand side reduces to κ y, consistent with Townsend's attached-eddy hypothesis. The second term on the right-hand-side is the wake term accounting for mean density variations, where Coles' wake parameter Π depends on the Reynolds number, as discussed in the subsection below. §.§ Characterizing low-Reynolds-number effects on the wake parameter For incompressible boundary layers, Coles’ wake parameter is known to strongly depend on Re_θ at low Reynolds numbers <cit.>. For compressible boundary layers, the ambiguity of the optimal Reynolds number definition poses a challenge to characterize the wake parameter. <cit.>, mainly using experimental data at that time, observed that the momentum-thickness Reynolds number with viscosity at the wall (Re_δ_2) is the suitable definition to scale Π. However, intuitively, Π should scale with Reynolds number based on the free-stream properties <cit.>. Given the recent availability of Direct Numerical Simulation (DNS) data at moderate Reynolds numbers for both compressible and incompressible flows, we revisit the question of which Reynolds number best describes the wake parameter. First, we evaluate Π for several incompressible and compressible DNS cases from the literature and then report it as a function of different definitions of the Reynolds number, searching for the definition yielding the least spread of the data points. 
For incompressible flows, the wake strength can be determined as Π = 0.5κ( U̅^+(y=δ) - 1/κln(δ^+) - C), where C is the log-law intercept for the chosen κ. For compressible flows, the wake strength is based on the VD transformed velocity <cit.> as Π = 0.5 κ( U̅_vd^+(y=δ) - (U̅_vd^+)^log(y=δ) ), where U̅_vd^+ is obtained from the DNS data. The reference log law (U̅_vd^+)^log, unlike for incompressible flows, cannot be computed as 1/κln(y^+) + C_vd, because C_vd is found to be non-universal for diabatic compressible boundary layers <cit.>. Hence, (U̅_vd^+)^log can be obtained either by fitting a logarithmic curve to U̅_vd^+ <cit.>, or by inverse transforming the incompressible law of the wall. Here, we follow the latter approach by using the compressibility transformation of <cit.>. The value of the von Kármán constant κ plays a crucial role in estimating Π. <cit.> noted that a strong consensus on κ is needed to accurately estimate Π. However, such a consensus is yet missing <cit.>. Recently, <cit.> showed that κ = 0.384 is a suitable choice for incompressible boundary layers, verified to be true also for channels <cit.> and pipes <cit.>. However, due to historical reasons and wide acceptance of κ=0.41, we will proceed with this value. The same procedure can straightforwardly be repeated with a different value of κ. Figure <ref> shows the wake parameter for twenty-six compressible and nineteen incompressible boundary layer flows, as a function of Re_δ_2, Re_θ, Re_δ^* and Re_τ_∞^*. The spread in the data points is found to be quite large for all the definitions, as Π is the difference of two relatively large quantities, namely U̅_vd^+ and (U̅_vd^+)^log at the boundary layer edge, as outlined above. Note that even incompressible boundary layers are not devoid of this scatter <cit.>. Figure <ref>(a) shows the presence of two distinct branches, hence Re_δ_2 does not seem to be suitable to characterize Π, unlike reported in previous literature <cit.>. Among the four definitions of Reynolds number, Re_θ seems to show the least spread. Figure <ref>(b) also reports several functional forms of Π = f(Re_θ). Use of the modified Kármán-Schoenherr friction formula <cit.> for indirect evaluation of Π, does not show saturation at high Reynolds numbers as observed in <cit.> for incompressible flows. The Cebeci-Smith relation <cit.> underpredicts Π, but reproduces saturation at high Reynolds numbers. We thus propose a relation similar to that proposed by <cit.>, with modified constants to achieve a better fit with data from recent incompressible DNS <cit.>. The relation is Π = 0.69 [1 - exp(-0.243 √(z) - 0.15 z)], where z = Re_θ/425 - 1. Inset in Fig. <ref>(b) compares the skin-friction curve computed using Eq. (<ref>) with the modified Kármán-Schoenherr skin-friction formula <cit.>. The distance between the two curves is large at low Reynolds numbers, but less so at higher Reynolds numbers. As expected, the incompressible DNS data of <cit.>, follow the friction curve computed using Eq. (<ref>). §.§ Implementation of the proposed method For convenience, Eq. (<ref>) can also be expressed in terms of the dimensional variables τ_w, μ̅ and ρ̅ as, du̅/dy = τ_w/μ̅+ √(τ_w ρ̅)κ y D(M_τ)_μ_t + √(τ_w/ρ̅)/δ Π/κ π sin(πy/δ), where μ_t is the Johnson-King eddy viscosity model corrected for intrinsic compressibility effects, derived from <cit.> transformation. It can be readily used in turbulence modeling, for instance, as a wall-model in Large Eddy Simulations. Note that different eddy viscosity models can be used in Eq. 
(<ref>), for example, Prandtl's mixing length model (see Appendix A). Eq. (<ref>) covers the entire boundary layer, and it can be integrated in conjunction with a suitable temperature model such as the one proposed by <cit.>, which is given as T̅/T_w =1+T_r-T_w/T_w[(1-s Pr)(u̅/u_∞)^2+s Pr(u̅/u_∞)]+T_∞-T_r/T_w(u̅/u_∞)^2, where s Pr=0.8, T_r/T_∞ = 1 + 0.5 r (γ-1)M_∞^2, and r=Pr^1/3. Moreover, a suitable viscosity law (e.g., power or Sutherland's law), and the ideal gas equation of state ρ̅/ρ_w = T_w/T̅ have to be used to compute mean viscosity and density profiles, respectively. The inputs that need to be provided are the Reynolds number (Re_θ), free-stream Mach number (M_∞), wall cooling/heating parameter (T_w/T_r) and (optionally) the dimensional wall or free-stream temperature for Sutherland's law. It is important to note that Eq. (<ref>), and all solver inputs are based on the quantities in the free-stream, and not at the boundary layer edge. For more insights on the solver, please refer to the source code available on GitHub <cit.>. § RESULTS Figure <ref> shows the predicted velocity and temperature profiles for a selection of high Mach number cases. As can be seen, the DNS and the predicted profiles are in good agreement, thus corroborating our methodology. The insets in Figure <ref> show the error in the predicted skin-friction and heat-transfer coefficients for thirty compressible cases from the literature. For most cases, the friction coefficient (c_f) is predicted with +/-4% accuracy, with a maximum error of -5.3%. The prediction of the heat-transfer coefficient (c_h) shows a slightly larger error compared to c_f, potentially due to additional inaccuracies arising from the temperature-velocity relation. In most cases, c_h is predicted with +/-8% accuracy, with a maximum error of 10.3%. The proposed method is modular in that it can also be applied using other inner-layer transformations <cit.> with minor modifications as discussed in Appendix B. This is shown in Figure <ref>, which compares the proposed approach with another modular approach of <cit.>, both with different inner layer transformations. Additionally, the figure includes results obtained with the method of <cit.> using the VD transformation, and the widely recognized Van Driest II skin-friction formula <cit.>. Figure <ref> also shows the root-mean-square error, determined as RMS = √(1/N∑ε_c_f^2), where N is the total number of DNS cases considered. The Van Driest II formula and the method of have similar RMS error of about 6% ['s method with the more accurate temperature velocity relation in <cit.> leads to an RMS error of 12%.], which is not surprising as both of them are built on Van Driest's mixing-length arguments. The errors are selectively positive for majority of the cases, and it increases with higher Mach number and stronger wall cooling. The source of this error mainly resides in the inaccuracy of the VD velocity transformation in the near-wall region for diabatic flows. To eliminate this shortcoming, <cit.> developed a modular methodology, which is quite accurate when the transformation of <cit.> is used, but it is less accurate if other velocity transformations are implemented. This inaccuracy is because the outer layer velocity profile is also inverse-transformed according to the inner-layer transformation. 
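As a small illustration of the algebraic closure used by the solver described in the implementation subsection above (and not of the full iterative procedure, which also integrates the mean-shear equation and matches Re_θ), the snippet below evaluates the temperature-velocity relation together with the ideal-gas equation of state and Sutherland's law for given free-stream conditions. The numerical inputs (M_∞=5, T_w/T_r=0.5, T_∞=220 K, Pr=0.72) are illustrative choices, not cases from this paper.

```python
import numpy as np

# free-stream / wall conditions (illustrative inputs)
M_inf = 5.0            # free-stream Mach number
Tw_Tr = 0.5            # wall-cooling parameter T_w / T_r
T_inf = 220.0          # free-stream temperature [K], needed only for Sutherland's law
gamma, Pr, sPr = 1.4, 0.72, 0.8

r   = Pr ** (1.0 / 3.0)                                          # recovery factor
T_r = T_inf * (1.0 + 0.5 * r * (gamma - 1.0) * M_inf ** 2)       # recovery temperature
T_w = Tw_Tr * T_r                                                # wall temperature

u_ratio = np.linspace(0.0, 1.0, 101)                             # u / u_inf across the layer

# temperature-velocity relation quoted above
T = T_w * (1.0
           + (T_r - T_w) / T_w * ((1.0 - sPr) * u_ratio ** 2 + sPr * u_ratio)
           + (T_inf - T_r) / T_w * u_ratio ** 2)

rho_ratio = T_w / T                     # ideal gas equation of state: rho/rho_w = T_w / T
# Sutherland's law with standard air constants (mu_ref at 273.15 K, S = 110.4 K)
mu = 1.716e-5 * (T / 273.15) ** 1.5 * (273.15 + 110.4) / (T + 110.4)

print(T[0], T[-1])       # T = T_w at the wall (u = 0) and T = T_inf at the free stream (u = u_inf)
print(rho_ratio[0])      # = 1 at the wall
print(mu[0] / mu[-1])    # wall-to-free-stream viscosity ratio
```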
In the current approach, the velocity profile is instead inverse-transformed using two distinct transformations, which take into account the different scaling properties of the inner and outer layers, thus reducing the RMS error with respect to Kumar and Larsson's modular method for all the transformations tested herein. The error using the proposed approach with the TL transformation is preferentially positive for all the cases. This is due to the log-law shift observed in the TL scaling, which is effectively removed in the transformation, thereby yielding an RMS error of 2.66%, which is the lowest among all approaches. § CONCLUSIONS We have derived an expression for the mean velocity gradient in high-speed boundary layers [Eq. (<ref>)] that combines the inner-layer transformation recently proposed by <cit.> and the <cit.> outer-layer transformation, thus covering the entire boundary layer. The Coles' wake parameter in this expression is determined using an adjusted Cebeci and Smith relation [Eq. (<ref>)] with the definition of Re_θ as the most suitable parameter to characterize low-Reynolds-number effects on Π. This method allows remarkably accurate predictions of the mean velocity and temperature profiles, leading to estimation of the friction and heat-transfer coefficients which are within +/-4% and +/-8% error bounds with respect to DNS data, respectively. The skin-friction results are compared with that from other state-of-the-art approaches considered in literature. Limited accuracy of the VD II and 's methods is attributable to inaccuracy of the underlying VD transformation in the near-wall region of diabatic boundary layers, whereas inaccuracy in 's approach is attributable to the unsuitability of the inner layer velocity transformations in the outer layer. By combining different scaling laws in the inner layer with the Van Driest transformation in the outer layer, our method demonstrates improved results for all the inner-layer transformations herein tested, with the lowest RMS error of 2.66% achieved with 's transformation. The methodology developed in this note promises straightforward application to other classes of wall-bounded flows like channels and pipes, upon change of the temperature-velocity relation <cit.>, and using different values of the wake parameter Π <cit.>. Also, the method is modular in the sense that it can be used with other temperature models and equations of state. § ACKNOWLEDGEMENTS We thank Dr. P. Costa for the insightful discussions. This work was supported by the European Research Council grant no. ERC-2019-CoG-864660, Critical; and the Air Force Office of Scientific Research under grants FA9550-19-1-0210 and FA9550-19-1-7029. § APPENDIX A: MEAN SHEAR USING PRANDTL'S MIXING LENGTH MODEL The choice of the eddy viscosity model affects the first term on the right hand side of Eq. (<ref>). By analogy, the mean shear equation using Prandtl's mixing length model is thus as follows, du̅/dy = 2 τ_w/μ̅+ √(μ̅^2+[2√(τ_wρ̅)κ y D(M_τ)]^2) + √(τ_w/ρ̅)/δ Π/κ π sin(πy/δ), where D(M_τ) is the damping function corrected for intrinsic compressibility effects as D(M_τ) = 1 - exp(-y^*/A^+ + 39 M_τ), with A^+ = 25.53 (or 26) for κ = 0.41, and where the additive term 39 M_τ is obtained following similar steps as for the Johnson-King model (see Ref. <cit.>). § APPENDIX B: IMPLEMENTATION OF THE METHOD USING VELOCITY TRANSFORMATIONS IN REF. <CIT.> In the logarithmic region and beyond, the first term on the right-hand side of Eq. 
(<ref>) reduces to √(τ_w/ρ̅)/(κ y), which is the same as Van Driest's original arguments <cit.>. It is crucial to satisfy this condition, otherwise the logarithmic profile extending to the outer layer would not obey Van Driest's scaling. The transformations of <cit.> fail to satisfy this property. To address this issue, we enforce Van Driest's scaling in the outer layer by modifying Eq. (<ref>) as follows y^+_T ≤ 50 : du̅^+ = 𝒯^-1 dU̅^+_inner+ f_1^-1d U̅^+_wake, y^+_T > 50 : du̅^+ = f_1^-1 dU̅^+_inner + f_1^-1d U̅^+_wake, where 𝒯 denotes the inner-layer transformation kernel and y^+_T is the transformed coordinate. The value of 50 is taken arbitrarily as a start of the logarithmic region. 39 urlstyle [Huang et al.(1993)Huang, Bradshaw, and Coakley]huang1993skin Huang, P., Bradshaw, P., and Coakley, T., Skin friction and velocity profile family for compressible turbulentboundary layers, AIAA journal, Vol. 31, No. 9, 1993, pp. 1600–1604. [Kumar and Larsson(2022)]kumar2022modular Kumar, V., and Larsson, J., Modular Method for Estimation of Velocity and Temperature Profiles in High-Speed Boundary Layers, AIAA Journal, Vol. 60, No. 9, 2022, pp. 5165–5172. [Hasan et al.(2023)Hasan, Larsson, Pirozzoli, and Pecnik]hasan2023incorporating Hasan, A. M., Larsson, J., Pirozzoli, S., and Pecnik, R., Incorporating intrinsic compressibility effects in velocity transformations for wall-bounded turbulent flows, , 2023. [Van Driest(1951)]van1951turbulent Van Driest, E. R., Turbulent boundary layer in compressible fluids, Journal of the Aeronautical Sciences, Vol. 18, No. 3, 1951, pp. 145–160. [Zhang et al.(2014)Zhang, Bi, Hussain, and She]zhang2014generalized Zhang, Y.-S., Bi, W.-T., Hussain, F., and She, Z.-S., A generalized Reynolds analogy for compressible wall-bounded turbulent flows, Journal of Fluid Mechanics, Vol. 739, 2014, pp. 392–420. [Musker(1979)]musker1979explicit Musker, A., Explicit expression for the smooth wall velocity distribution in a turbulent boundary layer, AIAA Journal, Vol. 17, No. 6, 1979, pp. 655–657. [Chauhan et al.(2007)Chauhan, Nagib, and Monkewitz]chauhan2007composite Chauhan, K., Nagib, H., and Monkewitz, P., On the composite logarithmic profile in zero pressure gradient turbulent boundary layers, 45th AIAA Aerospace Sciences Meeting and Exhibit, 2007, p. 532. [Nagib and Chauhan(2008)]nagib2008variations Nagib, H. M., and Chauhan, K. A., Variations of von Kármán coefficient in canonical flows, Physics of fluids, Vol. 20, No. 10, 2008, p. 101518. [Van Driest(1956a)]van1956turbulent Van Driest, E. R., On turbulent flow near a wall, Journal of the aeronautical sciences, Vol. 23, No. 11, 1956a, pp. 1007–1011. [Johnson and King(1985)]johnson1985mathematically Johnson, D. A., and King, L., A mathematically simple turbulence closure model for attached and separated turbulent boundary layers, AIAA journal, Vol. 23, No. 11, 1985, pp. 1684–1692. [Coles(1956)]coles1956law Coles, D., The law of the wake in the turbulent boundary layer, Journal of Fluid Mechanics, Vol. 1, No. 2, 1956, pp. 191–226. [Zagarola and Smits(1998)]zagarola1998new Zagarola, M. V., and Smits, A. J., A new mean velocity scaling for turbulent boundary layers, Proceedings of FEDSM, Vol. 98, 1998, pp. 21–25. [Fernholz and Finleyt(1996)]fernholz1996incompressible Fernholz, H., and Finleyt, P., The incompressible zero-pressure-gradient turbulent boundary layer: an assessment of the data, Progress in Aerospace Sciences, Vol. 32, No. 4, 1996, pp. 245–311. [Iyer and Malik(2019)]iyer2019analysis Iyer, P. S., and Malik, M. 
R., Analysis of the equilibrium wall model for high-speed turbulent flows, Physical Review Fluids, Vol. 4, No. 7, 2019, p. 074604. [Schlatter et al.(2009)Schlatter, Örlü, Li, Brethouwer, Fransson, Johansson, Alfredsson, and Henningson]schlatter2009turbulent Schlatter, P., Örlü, R., Li, Q., Brethouwer, G., Fransson, J. H., Johansson, A. V., Alfredsson, P. H., and Henningson, D. S., Turbulent boundary layers up to Re θ= 2500 studied through simulation and experiment, Physics of fluids, Vol. 21, No. 5, 2009, p. 051702. [Schlatter and Örlü(2010)]schlatter2010assessment Schlatter, P., and Örlü, R., Assessment of direct numerical simulation data of turbulent boundary layers, Journal of Fluid Mechanics, Vol. 659, 2010, pp. 116–126. [Jiménez et al.(2010)Jiménez, Hoyas, Simens, and Mizuno]jimenez2010turbulent Jiménez, J., Hoyas, S., Simens, M. P., and Mizuno, Y., Turbulent boundary layers and channels at moderate Reynolds numbers, Journal of Fluid Mechanics, Vol. 657, 2010, pp. 335–360. [Sillero et al.(2013)Sillero, Jiménez, and Moser]sillero2013one Sillero, J. A., Jiménez, J., and Moser, R. D., One-point statistics for turbulent wall-bounded flows at Reynolds numbers up to δ^+= 2000, Physics of Fluids, Vol. 25, No. 10, 2013, p. 105102. [Zhang et al.(2018)Zhang, Duan, and Choudhari]zhang2018direct Zhang, C., Duan, L., and Choudhari, M. M., Direct numerical simulation database for supersonic and hypersonic turbulent boundary layers, AIAA Journal, Vol. 56, No. 11, 2018, pp. 4297–4311. [Bernardini and Pirozzoli(2011)]bernardini2011wall Bernardini, M., and Pirozzoli, S., Wall pressure fluctuations beneath supersonic turbulent boundary layers, Physics of Fluids, Vol. 23, No. 8, 2011, p. 085102. [Cogo et al.(2022)Cogo, Salvadore, Picano, and Bernardini]cogo2022direct Cogo, M., Salvadore, F., Picano, F., and Bernardini, M., Direct numerical simulation of supersonic and hypersonic turbulent boundary layers at moderate-high Reynolds numbers and isothermal wall condition, Journal of Fluid Mechanics, Vol. 945, 2022, p. A30. [Ceci et al.(2022)Ceci, Palumbo, Larsson, and Pirozzoli]ceci2022numerical Ceci, A., Palumbo, A., Larsson, J., and Pirozzoli, S., Numerical tripping of high-speed turbulent boundary layers, Theoretical and Computational Fluid Dynamics, Vol. 36, No. 6, 2022, pp. 865–886. [Coles(1962)]coles1962turbulent Coles, D., The turbulent boundary layer in a compressible fluid. RAND Corp., Rep, Tech. rep., R-403-PR, 1962. [Cebeci and Smith(1974)]cebeci2012analysis Cebeci, T., and Smith, A. M. O., Analysis of turbulent boundary layers, Elsevier, 1974. [Fernholz and Finley(1980)]fernholz1980critical Fernholz, H.-H., and Finley, P., A critical commentary on mean flow data for two-dimensional compressible turbulent boundary layers, Tech. rep., AGARD-AG-253, 1980. [Smits and Dussauge(2006)]smits2006turbulent Smits, A. J., and Dussauge, J.-P., Turbulent shear layers in supersonic flow, Springer Science & Business Media, 2006. [Bradshaw(1977)]bradshaw1977compressible Bradshaw, P., Compressible turbulent shear layers, Annual Review of Fluid Mechanics, Vol. 9, No. 1, 1977, pp. 33–52. [Trettel and Larsson(2016)]trettel2016mean Trettel, A., and Larsson, J., Mean velocity scaling for compressible wall turbulence with heat transfer, Physics of Fluids, Vol. 28, No. 2, 2016, p. 026102. [Spalart(1988)]spalart1988direct Spalart, P. R., Direct simulation of a turbulent boundary layer up to Rθ= 1410, Journal of fluid mechanics, Vol. 187, 1988, pp. 61–98. [Monkewitz and Nagib(2023)]monkewitz2023hunt Monkewitz, P. 
A., and Nagib, H. M., The hunt for the Kármán "constant” revisited, , 2023. [Lee and Moser(2015)]lee2015direct Lee, M., and Moser, R. D., Direct numerical simulation of turbulent channel flow up to, Journal of fluid mechanics, Vol. 774, 2015, pp. 395–415. [Pirozzoli et al.(2021)Pirozzoli, Romero, Fatica, Verzicco, and Orlandi]pirozzoli2021one Pirozzoli, S., Romero, J., Fatica, M., Verzicco, R., and Orlandi, P., One-point statistics for turbulent pipe flow up to, Journal of fluid mechanics, Vol. 926, 2021. [Nagib et al.(2007)Nagib, Chauhan, and Monkewitz]nagib2007approach Nagib, H. M., Chauhan, K. A., and Monkewitz, P. A., Approach to an asymptotic state for zero pressure gradient turbulent boundary layers, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 365, No. 1852, 2007, pp. 755–770. [Pecnik and Hasan(2023)]jupnotebook Pecnik, R., and Hasan, A. M., A jupyter notebook to estimate mean profiles and fluxes for high-speed boundary layers, , 2023. <https://github.com/Fluid-Dynamics-Of-Energy-Systems-Team/DragandHeatTransferEstimation.git>. [Griffin et al.(2021)Griffin, Fu, and Moin]griffin2021velocity Griffin, K. P., Fu, L., and Moin, P., Velocity transformation for compressible wall-bounded turbulent flows with and without heat transfer, Proceedings of the National Academy of Sciences, Vol. 118, No. 34, 2021, p. e2111144118. [Volpiani et al.(2020)Volpiani, Iyer, Pirozzoli, and Larsson]volpiani2020data Volpiani, P. S., Iyer, P. S., Pirozzoli, S., and Larsson, J., Data-driven compressibility transformation for turbulent wall layers, Physical Review Fluids, Vol. 5, No. 5, 2020, p. 052602. [Van Driest(1956b)]van1956problem Van Driest, E. R., The problem of aerodynamic heating, Institute of the Aeronautical Sciences, 1956b. [Huang et al.(2020)Huang, Nicholson, Duan, Choudhari, and Bowersox]huang2020simulation Huang, J., Nicholson, G. L., Duan, L., Choudhari, M. M., and Bowersox, R. D., Simulation and modeling of cold-wall hypersonic turbulent boundary layers on flat plate, AIAA Scitech 2020 Forum, 2020, p. 0571. [Song et al.(2022)Song, Zhang, Liu, and Xia]song2022central Song, Y., Zhang, P., Liu, Y., and Xia, Z., Central mean temperature scaling in compressible turbulent channel flows with symmetric isothermal boundaries, Physical Review Fluids, Vol. 7, No. 4, 2022, p. 044606.
http://arxiv.org/abs/2307.02924v1
20230706112340
The Emotional Dilemma: Influence of a Human-like Robot on Trust and Cooperation
[ "Dennis Becker", "Diana Rueda", "Felix Beese", "Brenda Scarleth Gutierrez Torres", "Myriem Lafdili", "Kyra Ahrens", "Di Fu", "Erik Strahl", "Tom Weber", "Stefan Wermter" ]
cs.RO
[ "cs.RO", "cs.HC" ]
The Emotional Dilemma: Influence of a Human-like Robot on Trust and Cooperation Dennis Becker^*, Diana Rueda^*, Felix Beese, Brenda Scarleth Gutierrez Torres^†, Myriem Lafdili, Kyra Ahrens, Di Fu, Erik Strahl, Tom Weber, Stefan Wermter ^*These authors contributed equally. ^†Stipendiary from CONACyT and DAAD The authors gratefully acknowledge partial support from the German Research Foundation DFG under project CML, TRR169 and LeCareBot. All of the authors are with the Knowledge Technology Group, Department of Informatics, Universität Hamburg, Vogt-Kölln-Straße 30, Hamburg D-22527, Germany. Many thanks to Moritz Lahann, Ramtin Nouri, Jose Angel Sanchez Castro, Sebastian Stelter, and Denyse Uwase. August 1, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Increasing anthropomorphic robot behavioral design could affect trust and cooperation positively. However, studies have shown contradicting results and suggest a task-dependent relationship between robots that display emotions and trust. Therefore, this study analyzes the effect of robots that display human-like emotions on trust, cooperation, and participants' emotions. In the between-group study, participants play the coin entrustment game with an emotional and a non-emotional robot. The results show that the robot that displays emotions induces more anxiety than the neutral robot. Accordingly, the participants trust the emotional robot less and are less likely to cooperate. Furthermore, the perceived intelligence of a robot increases trust, while a desire to out-compete the robot can reduce trust and cooperation. Thus, the design of robots expressing emotions should be task dependent to avoid adverse effects that reduce trust and cooperation. 16cm(2cm,1cm) 16cm In: 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Korea, 28 Aug - 31 Aug, 2023. 16cm(2.5cm,26.1cm) 16cm © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. § INTRODUCTION Trust and cooperation affect humans' willingness to interact with robots <cit.>, and are essential to establish successful human-robot interaction <cit.>. Human-robot trust is composed of human-related, robot-related, and environment-related factors <cit.>. To the human-related factors also belong emotions, which can influence trust <cit.> and have been suggested as an integral part of forming trust <cit.>. Humans' emotional states have been shown to affect trust in human-robot interaction <cit.>, where positive emotions such as happiness increase trust <cit.> and negative emotions such as anxiety reduce trust <cit.>. 
Likewise, trust in a robot influences the willingness to cooperate with the robot <cit.>, which can be more pronounced in cooperative tasks <cit.>. Cooperation is frequently associated with an initial investment and trust in the collaborator in expectation of a greater mutual benefit <cit.>. While robots are becoming steadily human-like, research has been conducted on how anthropomorphic a robot should be designed for a specific task <cit.>. Studies suggest that robots that act more human-like by expressing emotions are perceived as more trustworthy <cit.> and encourage cooperation <cit.>. Specifically, trust is fostered when the robot's expressed emotions are congruent and engages in social conversation <cit.>. Contrary, to this relationship of increased anthropomorphism and trust, recent findings suggest a task dependency <cit.>, where a robot expressing social cues for critical tasks receives less trust than a neutral robot <cit.>. Thus, research on human-like robots and their effect on trust and cooperation with respect to emotions is required. For measuring trust and cooperation in human-robot interaction, we adopt a variation of the prisoner's dilemma, the coin entrustment game <cit.>. The coin entrustment game allows participants to decide on the amount of trust in the robot in expectation of an increased reward. In the conducted between-group experiment, an emotional robot is compared to a non-emotional robot. While the non-emotional robot portrays neutral emotions, the emotional robot displays happiness or sadness, depending on the participant's cooperation. The emotions are conveyed utilizing multiple modalities, such as speech interjections that deploy emotional prosody <cit.>, body gestures <cit.>, and facial expressions <cit.>. § RELATED WORK The prisoner's dilemma is a game theory-based <cit.> social dilemma that has frequently been used to study trust and cooperation in social situations <cit.>. The initial definition accounts for one round, where defecting is the preferred strategy to avoid punishment. However, experiments could not reliably explain the emergence of participants cooperating <cit.>. Thus, the repeated prisoner's dilemma <cit.> was suggested, where the game is played over consecutive rounds. With an undisclosed number of rounds, a cooperative strategy is encouraged and leads to a superior outcome for both players <cit.>. Since the prisoner's dilemma has a predefined payout matrix, a coin entrustment variant of the game was proposed <cit.>. In the coin entrustment game, a player's pay off and risk are dependent on the amount of entrusted coins, which allows measuring trust in the other player to cooperate and own cooperation separately. Research on social dilemmas suggests that humans act more selfishly and less emotional when interacting with artificial agents and robots <cit.>. When facing an artificial agent, humans tend to derive decisions more rational <cit.>, and the influence of emotions on decision-making is mitigated <cit.>. These findings are supported by physiological measures of skin conductivity and heart rate in a betting scenario against a computerized agent and a human <cit.>. Studies on the prisoner's dilemma in human-robot interaction suggest that cooperative behavior with humans tends to be higher than with robots <cit.>. When cooperating with a robot, self-reported measures indicate that decision-making is less emotionally driven and imply a notable lack of empathy that could result in a more selfish behavior <cit.>. 
However, when the robot is facing consequences for losing in the prisoner's dilemma, such as erasing its memory, individuals' game-play becomes more empathetic towards the robot <cit.>. Similarly, a robot expressing moral values leads to less competitive game-play, while a robot expressing emotions can increase competitive behaviour <cit.>. Contradicting these cooperative preferences, research on the n-player prisoner's dilemma with human and robotic players shows that cooperation with robotic players is higher, potentially due to the unpredictability of human players' strategy <cit.>. Correspondingly, for the coin entrustment game, more trust towards robotic players is reported <cit.>. Non-verbal communication signals and indicates willingness to cooperate in social dilemma and trust games <cit.>. People who are likely to cooperate are emotionally more expressive than people that are inclined to defect <cit.>. Additionally, research has shown that humans are sensitive to emotional display and facial expressions when evaluating a person's cooperativeness <cit.>. Specifically, smiling can evoke and indicate cooperation and is perceived as an indicator of the other's trustworthiness <cit.>, whereas contemptuous behavior is associated with defect <cit.>. Likewise, body motions <cit.> and gestures <cit.> of non-verbal communication are utilized to judge the partner's tendency to cooperate. Moreover, vocal features affect trustworthiness and cooperation <cit.>, where emotional expressiveness <cit.> and a happy-sounding voice can foster trust <cit.>. Further, congruence between a robot's behavior and its voice influences trust <cit.> and convergence of a robot's speech influences cooperation <cit.>. Despite the effect of non-verbal communication, studies on the prisoner's dilemma with a virtual agent suggest, that nonverbal behavior might be too subtle to be recognized <cit.>. Subsequently, no difference in the cooperation between a robot expressing sad and angry emotions could be shown; however, recognizing the robot's emotions can reduce the participants' cooperation <cit.>. § METHODOLOGY §.§ Participants After receiving a positive response from the Ethics Commission of the Department of Informatics at the University of Hamburg, participants were recruited from the university's campus, and the experiment was conducted with 47 participants. Out of these participants, three were excluded due to technical issues, and an additional three participants were discarded based on the control question. As control question, the participants rated the robot's emotionality on a 5-point scale ranging from Emotional to Non-Emotional. Participants who considered the non-emotional robot as Emotional, and participants that labeled the emotional robot as Non-Emotional were excluded. This results in a data set of 41 participants for analysis, consisting of 20 participants in the emotional and 21 participants in the non-emotional group. Of these participants, 41.5% were female and 58.5% were male. The participants' age distribution was 78% between 18–29 years, 19.5% between 30–39 years, and 2.5% between 40–49 years. Four of the participants self-reported a high familiarity with humanoid robots, two participants stated a high familiarity with negotiation games, and one participant self-reported a high familiarity with both humanoid robots and negotiation games. 
§.§ Experiment Design The experiment implements the coin entrustment game, and the participants play a total of 16 rounds, each consisting of two stages. During the first stage, both the participant and the robot secretly entrust between one and ten coins to the other player. For the robot, the number of entrusted coins follows the design of the experiment conducted in <cit.>. Specifically, the number of entrusted coins depends on the payoff from the previous round, and at least one coin is always entrusted. The entrustment is given by E(p) = ⌈ min(10 + (p - 10)/1.5, 10) ⌉ for p > 0, and E(p) = 1 for p ≤ 0, where p is the payoff of the previous round; the initial entrustment in the first round is three coins. In the second stage, the number of entrusted coins is revealed, and both players decide whether to keep or return the coins entrusted to them. If the entrusted coins are returned, the other player receives double the amount of the entrustment. If the player instead decides to keep the coins, these coins are added to the player's own total. To analyze the effect of an emotional robot on trust and cooperation, the robot always returns the entrusted coins, except for round eight. The robot's cooperative strategy was chosen because research suggests that a strict strategy can hinder cooperation <cit.> and that more generous game-play encourages cooperation <cit.>. After round eight, trust and cooperation with the robot have to be re-established. The robot encourages regaining trust by acknowledging the trust violation: “Perhaps I tried too hard to maximize my coins. I should not do that again.” Hereby, the robot attempts to restore the broken trust. However, the statement is deliberately left open, as promises have shown a strong effect on trust repair <cit.>. To encourage cooperative game-play, the participants are unaware of the number of rounds to be played. Further, the participants are instructed to score the highest number of coins among all participants, rather than to compete against the robot. As an incentive, the number of obtained coins was anonymously placed on a public leader board. §.§ Experiment Setup To avoid revealing the study objective, the experiment is presented as a carnival attraction. The general setup is illustrated in Figure <ref>, and the interaction with the robot during the experiment is shown in Figure <ref>. For a detailed illustration of the experiment setup, a schematic overview is provided in Figure <ref>. During the experiment, the participant is seated at a table facing the robot. Both players (Participant and Robot) use a controller (Controller 1 and Controller 2) to input their decisions during the game. Each controller has two rows of light-up buttons. The first row allows the participant to select the number of coins to entrust by pressing one of the buttons labeled 1 to 10. With the second row of buttons, the participant decides whether to keep or return the other player's entrustment. During both game stages, the robot uses a button-press animation, moving its hand close to the buttons and focusing its gaze on the controller. Afterward, the lighted button of the robot's controller provides visual confirmation of the robot's decision to the participant. To block each player's view of the other player's controller during the decision-making of both game stages, a motorized divider is used.
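For concreteness, a minimal sketch of the robot's entrustment rule from the Experiment Design subsection and of one round's payoff is given below. The grouping of the fraction as (p − 10)/1.5, the per-round payoff bookkeeping, and all function names are our illustrative assumptions, not the authors' implementation.

```python
import math

def robot_entrustment(prev_payoff: float) -> int:
    """Coins the robot entrusts in the next round.

    Our reading of the rule above: ceil(min(10 + (p - 10) / 1.5, 10)) for a
    positive previous payoff p, and 1 otherwise.  The first round is fixed
    at three coins independently of this function.
    """
    if prev_payoff <= 0:
        return 1
    return math.ceil(min(10 + (prev_payoff - 10) / 1.5, 10))

def round_payoff(own_entrust: int, partner_entrust: int,
                 own_keeps: bool, partner_returns: bool) -> int:
    """Illustrative payoff bookkeeping for one round (our assumption):
    a returned entrustment comes back doubled; kept coins are simply added."""
    payoff = 2 * own_entrust if partner_returns else 0
    if own_keeps:
        payoff += partner_entrust
    return payoff

print(robot_entrustment(6))    # -> 8
print(robot_entrustment(16))   # -> 10
print(round_payoff(own_entrust=3, partner_entrust=5,
                   own_keeps=False, partner_returns=True))   # -> 6
```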
Additionally, the stages of each round are clearly separated by a narrator's voice that announces the decisions of each player. The difference between the narrator and the robot is emphasized by utilising two spatially separated loudspeakers (Speaker 1 and Speaker 2). The first loudspeaker is located close to the experiment operator and is used by the narrator for game-related information and announcements. The second loudspeaker is used for the robot's voice and is located inside the robot's body frame. Before the experiment, a pilot study with eight participants was conducted. The data evaluation of the pilot study indicated that the participants understood the provided instructions and chose cooperative strategies instead of exploiting the robot's cooperative behavior. Further, it was suggested that the participants noticed the difference in emotions between both robots. §.§ Emotional Expressions In the experiment, the Neuro-Inspired COmpanion (NICO) <cit.> robot was used, which was designed to combine neurorobotics with human-robot interaction. The software for the experiment is implemented in the Robot Operating System (ROS) <cit.> and is composed of multiple modules to control the robots' interactions during the game, speech, gestures, and facial expressions. §.§.§ Speech The speech generation module utilizes Google Text-to-Speech[<https://github.com/pndurette/gTTS>] (gTTS) to synthesize speech from text for the robot's voice. Spoken sentences comprise instructions for the beginning of the game, the game-play, and the final interaction. For the robot's responses after each game round, sentence variations are included to maintain the perceived robot's animacy. Depending on the participant's choice, 18 unique sentences for defect and 38 sentences for cooperation are implemented. To differentiate between the robots, the emotional robot uses interjections to express emotions <cit.>. When the participant returns the coins, the emotional robot might say: “Hooray! You gave the coins back”, while the non-emotional robot limits the answer to: “You gave the coins back”. In contrast, when the participant keeps the coins, the answer “I see you kept the coins” for the non-emotional robot, is adjusted to “Owww, I see you kept the coins” for the emotional robot. The emotional interjections have specific prosody to convey either happy emotions when the participant cooperates, or sad emotions in the case of defect. The intonation of these interjections was simulated by applying a transfer learning text-to-speech model <cit.> to generate the utterances from audio references expressing either happiness or sadness. Afterward, the generated utterances were post-processed for artifact filtering, pitch, and tempo to match the robot's voice. For the experiment, six different happiness-conveying interjections and five sadness-conveying interjections are utilized. §.§.§ Gestures and Facial Expressions The robot’s gestures are used to increase liveliness and display emotional behavior. For the non-emotional robot, three neutral gestures (looking in a direction, pointing, and a hand gesture) are implemented <cit.>. The happy and sad gestures of the emotional robot are designed to express emotions depending on the participant's decision to cooperate or defect. For instance, gestures that include lowered arms and head are used to express sadness, while opening the arms is associated with happiness <cit.>. 
For each emotion, three different happy and sad gestures are implemented and randomly displayed after each round. In addition to the emotion-conveying gestures and interjections, the robot possesses LED arrays under its translucent face-plate in the mouth and eye area, which allows for displaying seven universal emotions <cit.>. Depending on the participant's decision to cooperate or defect, the robot shows either a happy or sad facial expression. During the remainder of the game and in the non-emotional group, a neutral facial expression is shown. The facial expressions of the NICO robot are depicted in Figure <ref>. §.§ Questionnaires and Measurements For the evaluation of an emotional and non-emotional robot, the Godspeed <cit.> and Discrete Emotions <cit.> questionnaires are assessed, and the participants' entrusted coins and decisions to cooperate or defect are recorded. The Godspeed questionnaire provides standardized metrics about the perceived robot's anthropomorphism, animacy, likeability, intelligence, and safety. Since the conducted experiment does not pose safety concerns, the category of perceived safety was omitted. The Discrete Emotions questionnaire provides insight into the participants' emotions during the experiment. Specifically, the questionnaire measures eight discrete emotions: anger, anxiety, desire, disgust, fear, happiness, relaxation, and sadness. In each game round, the number of entrusted coins is a direct measure of the participant's trust in the robot. Likewise, the decision to keep or return the robot's coins is a direct measure of cooperation. §.§ Study Design and Procedure The experiment is conducted in a between-subject design with two groups. The robot in the emotional group shows happy and sad emotions through facial expressions, gestures, and utterances. These displays of emotion are shown at the end of each round. The robot in the non-emotional group also demonstrates facial expressions, speech, and gestures. However, the non-emotional robot's gestures are replaced with neutral gestures and a neutral facial expression is used. The participants signed a consent form agreeing to participate in the experiment, and were randomly assigned to one of the two experimental conditions. After the participant's demographics are assessed, the participant is handed a scenario description with an introduction to the experiment. Then, the participant is brought to a neighboring room where the experiment is conducted and seated in front of the robot. An introduction to the controller is provided, and a trial round introduces the experiment procedure. Afterward, the experiment begins. After the experiment, the participant is guided back to the initial room and the questionnaires are assessed. § RESULTS An illustration of the Godspeed and Discrete Emotions questionnaire items with standard errors is shown in Figure <ref>. A Student's t-test <cit.> of the Godspeed questionnaire items shows a significant difference in animacy (p = .043) of the emotional robot (M = 3.18, SD = 0.71) in contrast to the non-emotional robot (M = 2.66, SD = 0.84). Regarding the participants' emotions, the Student's t-test suggests a significant difference in the anxiety item (p = .001) between the emotional group (M = 1.95, SD = 0.70) and non-emotional group (M = 1.44, SD = 0.46). Over the course of the experiment, the participants in both groups entrusted coins to the robot and decided to cooperate or defect. 
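The statistical comparisons used in this and the following paragraphs (independent-samples t-test, Mann–Whitney U test, Fisher's exact test, and Spearman correlation) map onto standard SciPy routines; the sketch below uses made-up placeholder arrays and counts, not the study's raw per-participant data.

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder data (not the study's raw values).
animacy_emotional = np.array([3.4, 2.8, 3.6, 3.0, 3.2, 3.5])
animacy_neutral   = np.array([2.6, 2.4, 3.0, 2.2, 2.8, 2.5])
coins_emotional   = np.array([6.4, 5.9, 7.1, 6.0, 6.5, 6.3])   # mean entrusted coins
coins_neutral     = np.array([7.0, 6.8, 7.2, 6.5, 6.9, 7.1])
coop_counts = np.array([[227, 93],    # emotional group: [cooperate, defect]
                        [276, 60]])   # neutral group (hypothetical totals)

t_stat, p_t = stats.ttest_ind(animacy_emotional, animacy_neutral)
u_stat, p_u = stats.mannwhitneyu(coins_emotional, coins_neutral,
                                 alternative="two-sided")
odds, p_f   = stats.fisher_exact(coop_counts)
rho, p_rho  = stats.spearmanr(coins_emotional, animacy_emotional)

print(f"t-test p={p_t:.3f}, Mann-Whitney p={p_u:.3f}, "
      f"Fisher p={p_f:.3f}, Spearman rho={rho:.2f} (p={p_rho:.3f})")
```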
The on-average entrusted coins and cooperation rate with standard errors are shown in Figure <ref>. A Mann–Whitney U test <cit.> shows that the non-emotional group (M = 6.88, SD = 2.94) entrusted significantly more coins to the robot over the course of the experiment (p = .015) than the emotional group (M = 6.38, SD = 2.85). For the cooperation rate, a Fisher's exact test <cit.> shows that the non-emotional group (M = 0.82, SD = 0.39) was significantly more likely to cooperate (p = .001) than the emotional group (M = 0.71, SD = 0.45). Analyzing the difference of the average entrustment for both groups over two consecutive rounds, a Mann–Whitney U test suggests that the emotional group entrusted significantly more coins (p = .038) in the third round (M = 6.65, SD = 1.98) in comparison to the second round (M = 5.40, SD = 1.90). In the ninth round, after the robot defected, a noticeable decrease in the average entrustment can be noticed. Especially, the average entrustment of the non-emotional group significantly (p = .039) decreased from the eighth round (M = 7.95, SD = 3.01) to the ninth round (M = 6.00, SD = 3.39). For the cooperation rate, a Fisher's exact test shows a significant difference (p = .045) between the emotional group (M = 0.70, SD = 0.46) and non-emotional group (M = 0.95, SD = 0.21) in round 15, and a difference in cooperation for the non-emotional group (p = .020) between round 15 (M = 0.95, SD = 0.21) and round 16 (M = 0.62, SD = 0.49). Although the robot's defect noticeably reduced the entrustment in the ninth round, the cooperation rate remained unaffected. Further, the participants exhibited a variety of entrustment strategies as indicated by the standard error. To estimate the relationship between the assessed items of the Godspeed and Discrete Emotions questionnaire, the Spearman correlation <cit.> of the on-average entrusted coins and cooperation rate is estimated and shown in Table <ref>. The estimated correlations show a relationship between the entrusted coins and the perceived intelligence of the robot, where a robot that is perceived as more intelligent receives a larger entrustment. On the contrary, high positive anticipation as measured with the Desire item results in fewer entrusted coins and can decrease the likelihood of cooperating. The amount of coins entrusted to the robot exhibits a strong correlation with the likelihood of cooperation. 0.9 § DISCUSSION The assessed Godspeed and Discrete Emotions questionnaires show significant differences between both experiment groups. Although both robots utilize gestures, facial expressions, and voice, the emotional robot was perceived as more animated than the non-emotional robot. The robot's emotional reaction to the participant's choice to either cooperate or defect, might have led the participants to perceive the emotional robot as more lively and animated. Further, the participants in the emotional group experienced stronger emotions toward the emotional robot. Specifically, participants in the emotional experiment group exhibited a greater level of anxiety than the non-emotional group. Evidently, the display of human-like emotions by the robot caused the participants more discomfort in the researched scenario. In the social dilemma that requires cooperation and reliance on the other player, participants' negative emotions reduced trust and cooperation. 
This indicates that the participants associated a more human-like behavior with a higher unpredictability of the robot's strategy, whereas a more neutral robot could be associated with pre-programmed and predictable actions to return the participant's entrustment. Likewise, the display of human emotions encouraged a more competitive game-play, as shown by the lower cooperation rate with the emotional robot. The analysis of changes in trust and cooperation over two consecutive rounds shows that both experiment groups behave similarly and that the robot defecting results in a significant trust loss for the non-emotional group. However, the cooperation rate remains unaffected. This might be attributed to the robot's trust repair attempt by acknowledging the trust violation. Afterward, the robot continues to cooperate, which rebuilds the participants' trust. The correlation analysis of the entrusted coins and the measured questionnaire items shows that the participants are likely to entrust more coins to a robot that is perceived as intelligent. This suggests that an as intelligent perceived robot could positively influence the establishment of trust and willingness to cooperate. The negative correlation between the Desire item of the Discrete Emotions questionnaire and the entrusted coins indicates that a participant's desire to succeed can lead to a more cautious entrustment. Since this task requires cooperation, a strong desire to succeed might result in smaller entrustments to assess the partner's willingness to cooperate. For the cooperation rate with the robot, the negative correlation of the desire to outcompete the other player could indicate that the participants initially establish cooperation with the robot, which results in larger entrustments from the robot, with the intent to keep the robot's coins if the amount appears profitable. These results underline that emotions directly affect trust and cooperation in the coin entrustment game, and suggest that negative emotions towards the robot might hinder the formation of trust and cooperation. Contrary to the desire to accumulate coins by defecting the robot, the amount of coins entrusted to the robot exhibits a strong correlation with cooperation. Since the coins represent trust, it appears plausible that a larger entrustment reflects the participant's willingness to cooperate. However, the correlation is symmetrical and does not show that trust is independent of cooperation. The willingness to cooperate could affect the extent of trust in the robot. § LIMITATIONS Despite the difference in trust and cooperation between an emotional and non-emotional robot, the findings have some limitations that should be addressed in future work. The sample size of the experiment could be increased, which might lead to the identification of additional effects and emotions that influence trust and willingness to cooperate with a robot in a social dilemma. Personality traits and trust disposition should be considered to estimate the effect and interplay between emotions and trust more accurately. Additionally, the experiment was conducted in a laboratory environment, and the effects on trust and cooperation could be more pronounced in a real-world environment. Likewise, a more realistic-looking humanoid robot could have a stronger effect. § CONCLUSION This study investigated the effects of an emotional robot on trust and cooperation in the coin entrustment game. 
Specifically, a robot that conveys emotions through prosodic speech interjections, facial expressions, and body gestures was compared to a robot that portrays neutral emotions. The robot was programmed to cooperate throughout the experiment, except at the experiment's midpoint, to encourage the participants' trust and cooperation. The participants' entrusted coins, cooperation, perception of the robot, and emotions during the experiment were evaluated. The results show that the participants experienced more anxiety during the interaction with the emotional robot. Accordingly, the emotional robot received less trust, and participants were less likely to cooperate with it. The robot's perceived intelligence affected trust, and robots that exhibit emotions might be perceived as less suitable and less competent for a cooperative task. However, emotional robots could be more resilient to breaches of trust, which points towards differences in how emotional robots are interacted with and perceived. These differences might be based on humans' preconceived assumptions about robots and how they should operate. Furthermore, the results are consistent with findings in the literature suggesting that trust in anthropomorphic robots is task-dependent, and they provide evidence that, depending on the task, trust and cooperation with a neutral robot can be higher than with a robot that displays human-like emotions. Subsequent experiments could investigate the specific effects of robots displaying either negative or positive emotions on trust and cooperation. Further, the granularity of the displayed emotions could be considered to provide a better understanding of the impact of emotions in human-robot interaction, leading to guidelines and insights on applications for which emotional robots could be beneficial or have adverse effects. This research underlines the importance of social interaction between robots and humans, where an anthropomorphic robot can reduce trust. Especially for critical or safety-related tasks, where cooperation is essential and mistakes can have consequences, the relationship between emotions and their effect on trust and cooperation has to be considered.
http://arxiv.org/abs/2307.02796v1
20230706061151
VerifAI: Verified Generative AI
[ "Nan Tang", "Chenyu Yang", "Ju Fan", "Lei Cao" ]
cs.DB
[ "cs.DB", "cs.CL", "cs.LG" ]
http://arxiv.org/abs/2307.00792v1
20230703071928
Anelasticity to plasticity transition in a model two-dimensional amorphous solid
[ "Baoshuang Shang" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "cond-mat.mtrl-sci" ]
[email protected] Songshan Lake Materials Laboratory, Dongguan 523808, China Anelasticity, as an intrinsic property of amorphous solids, plays a significant role in understanding their relaxation and deformation mechanism. However, due to the lack of long-range order in amorphous solids, the structural origin of anelasticity and its distinction from plasticity remain elusive. In this work, we study the transition from anelasticity to plasticity transition in a two-dimensional model glass. Three distinct mechanical behaviours, namely elasticity, anelasticity, and plasticity, are identified with control parameters in the amorphous solid. Through the study of finite size effects on these mechanical behaviors, it is revealed that anelasticity can be distinguished from plasticity. Anelasticity serves as an intrinsic bridge connecting the elasticity and plasticity of amorphous solids. Additionally, it is observed that anelastic events are localized, while plastic events are subextensive. The transition from anelasticity to plasticity is found to resemble the entanglement of long-range interactions between element excitations. This study sheds light on the fundamental nature of anelasticity as a key property of element excitations in amorphous solids. Anelasticity to plasticity transition in a model two-dimensional amorphous solid Baoshuang Shang August 1, 2023 ================================================================================ § INTRODUCTION Amorphous solids, as non-equilibrium materials, have captivated the interest of both academic researchers and industrial applications due to their diverse deformation behaviors <cit.>. However, unlike crystalline materials, the underlying deformation mechanism in amorphous solids remains a topic of debate. While crystalline materials exhibit topological defects such as dislocations or grain boundaries as excitation elements, amorphous solids rely on shear transformations or rearrangement events rather than specific topological structures<cit.>. Previous studies<cit.> have observed that rearrangement events in amorphous solids are localized and occur within the apparent elastic regime. However, considering plastic rearrangements as excitation elements is inappropriate due to their entanglement with long-range elastic interactions and plastic events<cit.>. Consequently, identifying the excitation element in amorphous solids remains a topic of ongoing debate<cit.>. Furthermore, it is crucial to note that not all rearrangement events in amorphous solids exhibit irreversible plastic behavior. Recently, a new type of rearrangement event characterized by an intrinsic anelastic nature has garnered significant attention<cit.>. Unlike plastic events, these anelastic rearrangements are reversible during the loading-unloading process. They play a vital role in influencing various mechanical properties of amorphous solids, including relaxation <cit.>, thermal cycling rejuvenation<cit.>, mechanical anisotropy<cit.> , and memory effect<cit.>. However, the understanding of the discrepancies and connections between anelastic and plastic events, as well as the key parameters governing the transition from anelasticity to plasticity, remain elusive. Further investigations are necessary to unravel the underlying mechanisms and establish a comprehensive understanding of the relationship between these two deformation modes in amorphous solids. In this study, we aim to address these questions through molecular dynamics simulations. 
We investigate the mechanical response of amorphous solids using athermal quasistatic shear and frozen matrix methods. Our focus is on observing the transition from anelasticity to plasticity and understanding the underlying mechanisms. By analyzing the characterized parameters, we identify three distinct deformation modes: elasticity, anelasticity, and plasticity. Specifically, we explore the effects of finite system size on these parameters. Our findings reveal that anelasticity precedes plasticity and serves as a critical intermediary between elasticity and plasticity. Furthermore, we observe that anelastic events, characterized by their system size independence, exhibit a localized nature within the material. On the other hand, plastic events display subextensive behavior. This suggests that plasticity emerges from the collective interaction of a series of anelastic events. As a result, anelastic events can be regarded as potential element excitations in amorphous solids. § METHOD §.§ Initial sample preparation We used a well studied two dimensional binary Lenard-Jonson model<cit.> to investigate the mechanical property of amorphous solid. All units were expressed in terms of the mass m and the two parameters describing the energy and length scales of interspecies interaction, ϵ and σ and the Boltzmann constant k_B=1, respectively. Therefore, time was measured in units of t_0 = √(mσ^2/ϵ), and temperature was measured in units of T_0=ϵ/k_B. The composition ratio of large (L) and small (S) was N_L : N_S = (1 + √(5))/4. 100 samples each containing 10^4 atoms with periodic boundary condition, were obtained by quenching from T=2.0 to T=0.18 with constant volume, and quench rate was fixed at 0.0000325 T_0/t_0, and then the initial sample was obtained by minimized the quenched sample with conjugate gradient (CG) method, where the effective temperature of the sample is around 0.335. The reduce density of system was fixed at 1.02. All the results were presented in reduced units. We performed all the simulations using the LAMMPS molecular dynamics package<cit.>, and used the OVITO package<cit.> for atomic visualization. §.§ Frozen matrix method Frozen matrix method is a useful tool to investigate the local yield stress<cit.> or local modulus<cit.> of the amorphous system, and local relaxation time<cit.> of supercooled liquid. Here, we extent this method to study the anelasticity to plasticity transition in the amorphous solid. The investigated region was selected within a radius R, then the outside region was frozen with affine deformation (Figure <ref> (a)). The mechanical response of amorphous solid was probed by the athermal quasi-static shear (AQS) protocol The simple shear deformation was performed in all regions and the shear strain increment was δγ=10^-5, during each shear strain increment, the investigated region was relaxed by energy minimization, and the outside region was frozen with affine deformation. The loading process continues until the shear strain reaches to 0.3. §.§ Physical property characterization §.§.§ anelastic event and plastic event During the loading process, stress and potential energy drops occur (Figure <ref> (b),(c)), which are caused by rearrangement events (Figure <ref> (d)). A rearrangement event is defined as when the maximum atomic displacement exceeds 0.1, or the potential energy drop per atom Δ U is greater than 0.1, in the investigated region. To characterize the anelastic and plastic events, the following unloading process was performed. 
After each potential energy drop, the shear strain was reversed and sheared back to γ = 0, which is called the unloaded sample. The mean squared displacement (MSD) between the unloaded sample and the initial sample was compared, and if MSD was zero, the drop was an anelastic event; otherwise, it was a plastic event. The potential energy per atom U and the shear stress τ of the investigated region were monitored during the loading and unloading process, and the shear strain increment was fixed at δγ=10^-5. §.§.§ atomic displacement To compare configuration 1 and configuration 2, the atomic displacement of atom i is defined as d⃗_⃗i⃗=r⃗_⃗i⃗(1)-r⃗_⃗i⃗(2), where r⃗_⃗i⃗(1) is the coordination vector of atomic i at configuration 1. The MSD between two configuration can be defined as ∑_i ∈ Nd⃗_⃗i⃗·d⃗_⃗i⃗, where N is the atomic number in the investigation region. § RESULT AND DISCUSSION Figure <ref> shows the loading process of the amorphous solid with various system sizes. There are notable finite size effects with frozen matrix boundary condition, both the density of stress drop Δτ and potential energy drop per atom Δ U increases with system size R. Conversely, the magnitude of Δτ and Δ U decreases with system size (Figure <ref> f,g). During the stress drop (Figure <ref> e), the atomic displacement d_i shows a typical quadratic symmetry. This is qualitatively consistent with the situation with periodic boundary condition <cit.>. However, the frozen matrix boundary condition not only blocks the long range elastic interaction from outside region, but also causes significant confinement effect<cit.>. As suggested by Regev et al <cit.>, the confinement effect can lead to reversible rearrangement. As shown in Figure <ref> , for R=6 system, both the stress and potential energy state are fully recovered after the loading-unloading process, indicating an anelastic behavior. In contrast, for R=26 system, both the stress and potential energy states are different from the initial state, indicating a plastic behavior. Interestingly, for R=11 system, the stress state is almost recovered but the energy state is not. This behavior can be attributed to the confinement effect of the frozen matrix boundary, and it signifies a transition from anelasticity to plasticity controlled by the system size. Increasing the system size weakens the confinement effect, thereby influencing the nature of the deformation response. Moreover, it should be noted that for a given system size, the increase in strain can also lead to a transition from anelasticity to plasticity. This behavior is depicted in Figure <ref>, which illustrates the loading process with various unloading processes for a system size of R = 11. When the unloading process starts from a strain value of γ = 0.14, the potential energy of the system can be fully recovered to its initial state. However, when the unloading process starts from γ = 0.15 or γ = 0.19, the potential energy of the unloaded samples (Back-1, Back-2) is higher than the initial state (Figure <ref>(a)). The loading process exhibits a characteristic first drop strain, denoted as γ_e = 0.02545 (Figure 3(b)). When the loading strain γ is smaller than γ_e, the loading-unloading process can be fully recovered. This behavior is indicative of elasticity, where no dissipation (∮τ dγ≡ 0) occurs during the loading phase. In addition, when the loading strain exceeds γ_e (γ > γ_e), the potential energy-strain curve exhibits sudden drops, indicating atomic rearrangement events. 
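A minimal sketch of this event detection and classification is given below, assuming per-atom coordinates are available as NumPy arrays. The displacement and energy thresholds of 0.1 follow the text, while the zero-MSD tolerance and the synthetic configurations are our own illustrative choices rather than the authors' LAMMPS workflow.

```python
import numpy as np

def msd(ref: np.ndarray, cfg: np.ndarray) -> float:
    """Mean squared displacement per atom between two N x 2 configurations."""
    d = cfg - ref
    return float(np.mean(np.sum(d * d, axis=1)))

def is_rearrangement(max_disp: float, dU_per_atom: float,
                     disp_tol: float = 0.1, energy_tol: float = 0.1) -> bool:
    """A drop counts as a rearrangement event if either threshold (0.1, per the
    text) is exceeded."""
    return max_disp > disp_tol or dU_per_atom > energy_tol

def classify_event(initial: np.ndarray, unloaded: np.ndarray,
                   tol: float = 1e-8) -> str:
    """Anelastic if the unloaded sample returns to the initial configuration
    (MSD ~ 0), plastic otherwise; tol is our numerical tolerance."""
    return "anelastic" if msd(initial, unloaded) < tol else "plastic"

# Synthetic example: 100 atoms in 2D, with one atom left permanently displaced.
rng = np.random.default_rng(0)
initial = rng.uniform(0.0, 20.0, size=(100, 2))
unloaded = initial.copy()
unloaded[0] += np.array([0.3, -0.2])
print(classify_event(initial, initial))    # -> anelastic
print(classify_event(initial, unloaded))   # -> plastic
```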
These energy drops are further examined through the unloading process. The last energy drop corresponding to an anelastic event can be identified (Figure <ref>(c)). The loading process exhibits a last anelastic drop strain denoted as γ_p = 0.145. For loading strains between γ_e and γ_p, the loading-unloading process can be fully recovered, but dissipation occurs. When the loading strain exceeds γ_p (γ > γ_p), the unloaded sample cannot be fully recovered to its initial state, and plastic rearrangement takes place. The displacement color map of the anelastic drop is presented in Figure <ref>(d). A comparison of the atomic displacement during the anelastic drop with the plastic displacement of Back-1 and Back-2 samples reveals that the atomic displacement during the anelastic drop is reversible, while plastic rearrangement results in permanent displacement. By comparing the energy of the unloaded sample with the initial state, three types of mechanical property during the loading process can be identified: elasticity, anelasticity, and plasticity (Figure <ref>(e)). The anelasticity to plasticity transition is both controlled by system size R and loading strain γ, and the key parameters for the transition are γ_e and γ_p. Figure <ref> (a) shows the mean value of γ_e and γ_p decreases with system size R, and the finite size effect can be well depicted by a powerlaw formula γ∼ R^-α, for γ_e ∼ R^-1.09 ± 0.02 and γ_p ∼ R^-0.86 ± 0.04, respectively. The discrepancy of exponent between γ_e and γ_p reveals as the system size increase, the anelastic property will be more significant, it dominates the deformation of apparent elastic region. Furthermore, for thermodynamic limit R →∞ , both of elasticity and anelasticity will be disappeared, and the intrinsic mechanical property of amorphous solid is inelasticity, this is consistent with the observation of avalanche statistics in the apparent elastic region<cit.>. It confirms that regardless of boundary condition, the nature of amorphous solid is inelastic. The finite size exponent can determent the property of avalanche statistic, and the avalanche size is defined as N Δ U. The γ_p is the boundary of anelasticity and plasticity, hence the avalanche happens during loading process between γ_e and γ_p can be recognized as anelastic avalanche, and otherwise is plastic avalanche, the statistic property of the avalanche size is shown in Figure <ref> (b), the plastic avalanche increases with system size, but in contrast the anelastic avalanche doesn't change with system size. It reveals the anelastic avalanche is localized event, which is distinguished with the sub-extensitive nature of plastic avalanche<cit.>. The anelastic event can be recognized as the basin hopping within a metabasin based on the view of potential energy landscape<cit.>, and the accumulation of basin hopping would arouse metabasin hopping, it means the plastic event can be only triggered when the strain is larger than γ_p. Therefore, the anelasticity can be identified as the activation of element excitation, such as STZ<cit.> or fluid units<cit.>, and the sub-extensitive plasticity is composed of the element excitation, and entangled with long range elasticity. § CONCLUSION In summary, our study focused on understanding the transition from anelasticity to plasticity in amorphous solids. We employed molecular dynamics simulations using athermal quasistatic shear and frozen matrix methods. 
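Before the conclusions, the finite-size scaling γ ∼ R^(-α) quoted above can be illustrated with a simple log–log regression; in the sketch below only the R = 11 strains (γ_e = 0.02545 and γ_p = 0.145) are taken from the text, and the remaining points are hypothetical values chosen to be consistent with the reported exponents.

```python
import numpy as np

# System sizes R and characteristic strains; only the R = 11 entries are from
# the text, the rest are hypothetical placeholders.
R       = np.array([6.0, 11.0, 16.0, 21.0, 26.0])
gamma_e = np.array([0.0493, 0.02545, 0.0169, 0.0126, 0.0100])
gamma_p = np.array([0.244, 0.145, 0.105, 0.083, 0.069])

def powerlaw_exponent(R, gamma):
    """Exponent alpha in gamma ~ R^(-alpha) from a log-log least-squares fit."""
    slope, _intercept = np.polyfit(np.log(R), np.log(gamma), 1)
    return -slope

print(f"alpha_e ~ {powerlaw_exponent(R, gamma_e):.2f}")   # paper: 1.09 +/- 0.02
print(f"alpha_p ~ {powerlaw_exponent(R, gamma_p):.2f}")   # paper: 0.86 +/- 0.04
```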
By analyzing various parameters, we identified three distinct deformation modes: elasticity, anelasticity, and plasticity. We found that anelasticity acts as a critical intermediary between elasticity and plasticity, and it precedes the onset of plastic deformation. The transition from anelasticity to plasticity is influenced by both the system size (R) and the applied loading strain (γ). We characterized this transition using key parameters γ_e and γ_p. The transition from anelasticity to plasticity occurs when the loading strain exceeds γ_p. Anelastic events were found to exhibit localized behavior, while plastic events exhibited subextensive behavior. Our findings suggest that plasticity arises from the interaction of a series of anelastic events, with anelastic events acting as potential element excitations in amorphous solids. The transition from anelasticity to plasticity is influenced by system size and loading strain, and the properties of avalanche statistics depend on the system size and the distinction between anelastic and plastic avalanches. § ACKNOWLEDGEMENTS This work is supported by Guangdong Major Project of Basic and Applied Basic Research, China (Grant No.2019B030302010), Guangdong Basic and Applied Basic Research, China (Grant No.2021B1515140005), Pearl River Talent Recruitment Program (Grant No.2021QN02C04), the NSF of China (Grant No.52130108). unsrt 10 Schuh20074067 Christopher A. Schuh, Todd C. Hufnagel, and Upadrasta Ramamurty. Mechanical behavior of amorphous alloys. Acta Materialia, 55(12):4067 – 4109, 2007. Wang2012a Wei Hua Wang. The elastic properties, elastic models and elastic perspectives of metallic glasses. Progress in Materials Science, 57(3):487 – 656, 2012. hufnagel2016deformation Todd C Hufnagel, Christopher A Schuh, and Michael L Falk. Deformation of metallic glasses: Recent developments in theory, simulations, and experiments. Acta Materialia, 2016. Cheng2011379 Y.Q. Cheng and E. Ma. Atomic-level structure and structure–property relationship in metallic glasses. Progress in Materials Science, 56(4):379 – 473, 2011. barrat2011heterogeneities Jean-Louis Barrat and Anaël Lemaître. Heterogeneities in amorphous systems under shear. Dynamical Heterogeneities in Glasses, Colloids, and Granular Media, 150:264, 2011. RevModPhys.90.045006 Alexandre Nicolas, Ezequiel E. Ferrero, Kirsten Martens, and Jean-Louis Barrat. Deformation and flow of amorphous solids: Insights from elastoplastic models. Rev. Mod. Phys., 90:045006, Dec 2018. argon1979plastic AS Argon. Plastic deformation in metallic glasses. Acta metallurgica, 27(1):47–58, 1979. PhysRevLett.95.095502 Yunfeng Shi and Michael L. Falk. Strain localization and percolation of stable structure in amorphous solids. Phys. Rev. Lett., 95:095502, 2005. schall2007structural Peter Schall, David A Weitz, and Frans Spaepen. Structural rearrangements that govern flow in colloidal glasses. Science, 318(5858):1895–1899, 2007. PhysRevE.82.055103 Smarajit Karmakar, Edan Lerner, and Itamar Procaccia. Statistical physics of the yielding transition in amorphous solids. Phys. Rev. E, 82:055103, 2010. PhysRevE.96.033002 Jie Lin and Wen Zheng. Universal scaling of the stress-strain curve in amorphous solids. Phys. Rev. E, 96:033002, Sep 2017. PhysRevE.79.066109 Edan Lerner and Itamar Procaccia. Locality and nonlocality in elastoplastic responses of amorphous solids. Phys. Rev. E, 79:066109, 2009. PhysRevLett.93.016001 Craig Maloney and Anaël Lemaître. 
Subextensive scaling in the athermal, quasistatic limit of amorphous matter in plastic shear flow. Phys. Rev. Lett., 93:016001, 2004. krisponeit2014crossover Jon-Olaf Krisponeit, Sebastian Pitikaris, Karina E Avila, Stefan Küchemann, Antje Krüger, and Konrad Samwer. Crossover from random three-dimensional avalanches to correlated nano shear bands in metallic glasses. Nature Communications, 5, 2014. PhysRevLett.112.155501 James Antonaglia, Wendelin J. Wright, Xiaojun Gu, Rachel R. Byer, Todd C. Hufnagel, Michael LeBlanc, Jonathan T. Uhl, and Karin A. Dahmen. Bulk metallic glasses deform via slip avalanches. Phys. Rev. Lett., 112:155501, 2014. Lagogianni2018 Alexandra E. Lagogianni, Chen Liu, Kirsten Martens, and Konrad Samwer. Plastic avalanches in the so-called elastic regime of metallic glasses. The European Physical Journal B, 91(6), 2018. Shang2019 Baoshuang Shang, Jörg Rottler, Pengfei Guan, and Jean-Louis Barrat. Local versus global stretched mechanical response in a supercooled liquid near the glass transition. Phys. Rev. Lett., 122:105501, 2019. PhysRevMaterials.7.013601 J. Duan, Y. J. Wang, L. H. Dai, and M. Q. Jiang. Elastic interactions of plastic events in strained amorphous solids before yield. Phys. Rev. Mater., 7:013601, Jan 2023. argon2013strain AS Argon. Strain avalanches in plasticity. Philosophical Magazine, 93(28-30):3795–3808, 2013. xu2017strain Bin Xu, Michael Falk, Jinfu Li, and Lingti Kong. Strain-dependent activation energy of shear transformation in metallic glasses. Phys. Rev. B, 95:144201, 2017. PhysRevMaterials.4.113609 D. Richard, M. Ozawa, S. Patinet, E. Stanifer, B. Shang, S. A. Ridout, B. Xu, G. Zhang, P. K. Morse, J.-L. Barrat, L. Berthier, M. L. Falk, P. Guan, A. J. Liu, K. Martens, S. Sastry, D. Vandembroucq, E. Lerner, and M. L. Manning. Predicting plasticity in disordered solids from structural indicators. Phys. Rev. Mater., 4:113609, 2020. PhysRevE.102.033006 Firaz Ebrahem, Franz Bamer, and Bernd Markert. Origin of reversible and irreversible atomic-scale rearrangements in a model two-dimensional network glass. Phys. Rev. E, 102:033006, 2020. Regev2015 Ido Regev, John Weber, Charles Reichhardt, Karin A. Dahmen, and Turab Lookman. Reversibility and criticality in amorphous solids. Nature Communications, 6(1), November 2015. PhysRevLett.99.135502 John S. Harmon, Marios D. Demetriou, William L. Johnson, and Konrad Samwer. Anelastic to plastic transition in metallic glass-forming liquids. Phys. Rev. Lett., 99:135502, 2007. Wang2019 Wei Hua Wang. Dynamic relaxations and relaxation-property relationships in metallic glasses. Progress in Materials Science, 106:100561, 2019. Costa2022anelastic Miguel B. Costa, Juan J. Londoño, Andreas Blatter, Avinash Hariharan, Annett Gebert, Michael A. Carpenter, and A. Lindsay Greer. Anelastic-like nature of the rejuvenation of metallic glasses by cryogenic thermal cycling. Acta Materialia, page 118551, 2022. PhysRevB.48.3048 T. Tomida and T. Egami. Molecular-dynamics study of structural anisotropy and anelasticity in metallic glasses. Phys. Rev. B, 48:3048–3057, 1993. Shang2022cycle Baoshuang Shang, Weihua Wang, and Pengfei Guan. Cycle deformation enabled controllable mechanical polarity of bulk metallic glasses. Acta Materialia, 225:117557, 2022. PhysRevE.88.062401 Ido Regev, Turab Lookman, and Charles Reichhardt. Onset of irreversibility and chaos in amorphous solids under periodic shear. Phys. Rev. E, 88:062401, 2013. PhysRevLett.112.025702 Davide Fiocco, Giuseppe Foffi, and Srikanth Sastry. 
Encoding of memory in sheared amorphous solids. Phys. Rev. Lett., 112:025702, 2014. PhysRevE.97.033001 Armand Barbot, Matthias Lerbinger, Anier Hernandez-Garcia, Reinaldo García-García, Michael L. Falk, Damien Vandembroucq, and Sylvain Patinet. Local yield stress statistics in model amorphous solids. Phys. Rev. E, 97:033001, Mar 2018. Thompson2022 Aidan P. Thompson, H. Metin Aktulga, Richard Berger, Dan S. Bolintineanu, W. Michael Brown, Paul S. Crozier, Pieter J. in 't Veld, Axel Kohlmeyer, Stan G. Moore, Trung Dac Nguyen, Ray Shan, Mark J. Stevens, Julien Tranchida, Christian Trott, and Steven J. Plimpton. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Computer Physics Communications, 271:108171, February 2022. Stukowski2009VisualizationAA Alexander Stukowski. Visualization and analysis of atomistic simulation data with ovito–the open visualization tool. Modelling and Simulation in Materials Science and Engineering, 18:015012, 2009. PhysRevLett.117.045501 Sylvain Patinet, Damien Vandembroucq, and Michael L. Falk. Connecting local yield stresses with plastic activity in amorphous solids. Phys. Rev. Lett., 117:045501, 2016. PhysRevE.87.042306 Hideyuki Mizuno, Stefano Mossa, and Jean-Louis Barrat. Measuring spatial distribution of the local elastic modulus in glasses. Phys. Rev. E, 87:042306, 2013. lemaitre2006sum Anaël Lemaître and Craig Maloney. Sum rules for the quasi-static and visco-elastic response of disordered solids at zero temperature. Journal of statistical physics, 123(2):415–453, 2006. PhysRevE.98.033002 Markus Blank-Burian and Andreas Heuer. Shearing small glass-forming systems: A potential energy landscape perspective. Phys. Rev. E, 98:033002, 2018. Wang2018 Zheng Wang and Wei-Hua Wang. Flow units as dynamic defects in metallic glassy materials. National Science Review, 6(2):304–323, 2018.
http://arxiv.org/abs/2307.03091v1
20230706160527
NANOGrav meets Hot New Early Dark Energy and the origin of neutrino mass
[ "Juan S. Cruz", "Florian Niedermann", "Martin S. Sloth" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph", "hep-th" ]
http://arxiv.org/abs/2307.01461v1
20230704034742
Approximating Quantum Lyapunov Exponents in Quantum Kicked Rotor
[ "Varsha Gupta" ]
quant-ph
[ "quant-ph" ]
Purdue University, West Lafayette, Indiana, United States conditions* >l<@= >X 1empty In this work, we study quantum chaos by focusing on the evolution of initially close states in the dynamics of the Quantum Kicked Rotor (QKR). We propose a novel measure, the Quantum Lyapunov Exponent (QLE), to quantify the degree of chaos in this quantum system, analogous to its classical counterpart. We begin by modeling the momentum space and then the QLE is computed through analyzing the fidelity between evolving states, offering insights into the quantum chaotic behavior. Furthermore, we extend our investigations to various initial states: localized, uniform, spreading, contracting and oscillating in momentum space. Our results unveil a diverse range of dynamical behaviors, highlighting the complex nature of quantum chaos. Finally, we propose an innovative optimization framework to represent a complex state as a superposition of the aforementioned states, which has potential implications for visualizing and understanding the dynamics of multifaceted quantum systems. Approximating Quantum Lyapunov Exponents in Quantum Kicked Rotor Varsha Guptamailto:[email protected] Department of Mathematics, Southern University of Science and Technology, Shenzhen, China E-mail: [email protected] ============================================================================================================================================ plain § INTRODUCTION In the last few years, quantum chaos, the quantum counterpart to classical chaos, has been the subject of extensive research due to its rich underlying dynamics and wide-ranging applications. The notion of chaos, intrinsic to various physical systems, dictates how small changes in initial conditions can lead to dramatic variations in the system's future state <cit.>. Moreover, this sensitivity to the initial state causes the predictability in the bahaviour of the system akin to stochastic systems, a feature commonly attributed to chaos <cit.>. Therefore, a measure of chaos is crucial in understanding the time evolution of states. Albeit classical systems demonstrating chaotic behavior are relatively well understood, translating these concepts to the quantum realm has posed significant challenges <cit.>. One of the key attributes of classical chaos is sensitivity to initial conditions, which is characterized by the Lyapunov exponent in classical systems <cit.>. In quantum systems, this sensitivity is not directly observable due to the unitary evolution of quantum states, which preserves the Hilbert space norm <cit.>. Therefore, alternative measures have been sought to characterize quantum chaos. These measures range from energy level statistics <cit.>, out-of-time-ordered correlators <cit.>, to quantum entanglement <cit.>, among others. Among various models used to study quantum chaos, the Quantum Kicked Rotor (QKR) has proven to be a paradigmatic one. Introduced as a quantum mechanical analogue of the classical kicked rotor, QKR is known for exhibiting both quantum and classical chaos <cit.>. In this work, we consider the QKR as our model system and aim to examine the evolution of initially close quantum states in its dynamics. As a measure of chaos in this quantum system, we propose the Quantum Lyapunov Exponent (QLE), inspired by its classical counterpart. We calculate the QLE based on the evolution of states under QKR and the subsequent calculation of fidelity between these states. 
Fidelity, or overlap, is a well-established measure to quantify how two states evolve in time <cit.>. We further extend our analysis to different states, namely localized, uniform, spreading, contracting and oscillating in momentum space. Finally, we propose an innovative optimization framework to represent complex states as a weighted combination of simple states. This approach can potentially aid in visualizing and understanding the dynamics of complex quantum systems. § MOMENTUM SPACE MODELING Let's consider a Quantum Kicked Rotor (QKR) to study quantum chaos. We'll start by constructing a mathematical model for the QKR in momentum space. It's known that the time evolution operator for the QKR in position space is provided by the following equation: U(T) = exp(-i/ħ·p^2T/2I) exp(-i/ħ· K cos(θ)). Similarly, the evolution operator in the momentum representation is then given by the following expression: U(T) = exp(-i/2Iħ· p^2T) exp(-iK cos(-iħ∂/∂ p)). We know that the second exponential term in the evolution operator is a tricky one. It's essentially a quantum Bessel function operator. Here, we simplify this expression using the relationship between Bessel functions and the exponential function: U(T) = exp(-i/2Iħ· p^2T) ∑_n=-∞^∞ i^n J_n(K) exp(-i/ħnp), where J_n(K) are the Bessel functions of the first kind. The wave function in momentum space ψ(p) evolves according to the following equation: |ψ(p, nT)⟩ = U^n(T) |ψ(p, 0)⟩, where U^n(T) denotes the nth power of the evolution operator and nT is the time after n kicks. As ħ increases, |ψ(p, nT)⟩ = [ exp(-i/2Iħ· p^2T) ∑_n=-∞^∞ i^n J_n(K) exp(-i/ħnp) ]^n |ψ(p, 0)⟩. We know that for large K, the Bessel function has most of its weight around n=K, so we can approximate the sum over n in the time-evolution operator by an integral and rewrite the above equation as following: ∑_n=-∞^∞ i^n J_n(K) exp(-i/ħnp) ≈∫_-∞^∞ i^n J_n(K) exp(-i/ħnp) dn. For large K, the integral is dominated by the region near the points where the phase of the integrand is stationary (i.e., the derivative of the phase with respect to n is zero). The phase of the integrand in this case is given by nlog(i) - np/ħ + [J_n(K)], where [J_n(K)] denotes the argument of the Bessel function, which we can approximate as π/4 for large K (because J_n(K) behaves like a decaying oscillatory function for large K). Setting the derivative of the phase with respect to n equal to zero gives: log(i) - p/ħ = 0, which gives us the stationary phase point: n_stat = p/ħlog(i) = p/ħπ/2 = 2p/ħπ. Using the method of steepest descent, we can now approximate the integral as following: |ψ(p, nT)⟩≈[ exp(-i/2Iħ· p^2T) i^n_stat J_n_stat(K) exp(-i/ħn_statp) √(2π/|n_stat”|)]^n |ψ(p, 0)⟩. where n_stat” is the second derivative of the phase at the stationary point n_stat. In order to compute the second derivative n_stat” of the phase function from the integral in the method of steepest descents, we need to identify the phase function itself first. In the steepest descents approximation, we have a sum or an integral of the form: ∫_-∞^∞ f(n) e^i g(n) dn, where f(n) is a slowly varying function and g(n) is a rapidly varying function (the phase function). The main contribution to this integral comes from the neighborhood of the points where g'(n) is zero (stationary points). In this case, the phase function seems to be: g(n) = n log n - np. Taking its derivative, we get: g'(n) = log n - p, which gives n_stat = e^p as the stationary point (where the derivative is zero). 
Therefore, the second derivative at the stationary point n_stat” = g”(n_stat) is equal to 1/e^p. Now substituting it back into the equation for |ψ(p, nT)⟩. |ψ(p, nT)⟩ ≈[ exp(-i/2Iħ· p^2T) i^e^p J_e^p(K) exp(-i/ħe^pp) √(2π e^p)]^n |ψ(p, 0)⟩. Here, J_e^p(K) is the Bessel function evaluated at the point e^p (the stationary point of the phase function), and √(2π e^p) is the Gaussian approximation of the momentum space width. § QLE APPROXIMATION In this section, we aim to provide an approximation of quantum chaos by examining the evolution of two initial states that are closely aligned, using the measure of fidelity. Let's say that we have two wave-functions |ψ_1⟩ and |ψ_2⟩ which are initially close. This closeness can be quantified by their overlap, which we'll assume to be nearly 1: ⟨ψ_1(0) |ψ_2(0)⟩≈ 1. In the momentum space, their momentum wave-functions, ψ_1(p,0) and ψ_2(p,0), are also very close, differing by a small amount, say δ p, in the momentum space: ψ_1(p,0) = ϕ(p), ψ_2(p,0) = ϕ(p + δ p). Here, ϕ(p) is a general function representing the momentum space wave-function. The states evolve under the time evolution operator to give: |ψ_1(p, nT)⟩ = [ exp(-i/2Iħ· p^2T) i^e^p J_e^p(K) exp(-i/ħe^pp) √(2π e^p)]^n |ψ_1(p, 0)⟩, |ψ_2(p, nT)⟩ = [ exp(-i/2Iħ· (p+δ p)^2T) i^e^p+δ p J_e^p+δ p(K) exp(-i/ħe^p+δ p(p+δ p)) √(2π e^p+δ p)]^n |ψ_2(p, 0)⟩. Despite their initial closeness, these states can diverge significantly over time, indicating a transition to chaotic behavior. Next, we calculate the distance between the states in operator space at each time step using the fidelity F(nT) = |⟨ψ_1(nT)|ψ_2(nT)⟩|^2 as shown: F(nT) = | ∫ ψ_1^*(p, nT) ψ_2(p, nT) |^2 dp. Substituting the expressions for |ψ_1(p, nT)⟩ and |ψ_2(p, nT)⟩, we find: F(nT) = | ∫[ exp(-i/2Iħ· p^2T) i^e^p J_e^p(K) exp(-i/ħe^pp) √(2π e^p)]^n ϕ^*(p) . × [ exp(-i/2Iħ· (p+δ p)^2T) i^e^p+δ p J_e^p+δ p(K) exp(-i/ħe^p+δ p(p+δ p)) √(2π e^p+δ p)]^n ϕ(p+δ p) |^2 dp Let's expand (p + δ p)^2 and denote e^p + δ p as e^p' for clarity. Here, p' = p + δ p. Now, (p + δ p)^2 = p^2 + 2pδ p + δ p^2. Substituting these terms, the fidelity can be simplified to: F(nT) = | ∫[ exp(-i/2Iħ· p^2T) i^e^p J_e^p(K) exp(-i/ħe^p p) √(2π e^p)]^n ϕ^*(p) ×[ exp(-i/2Iħ T (p^2 + 2pδ p + δ p^2)) i^e^p' J_e^p'(K) exp(-i/ħe^p' p') √(2π e^p')]^n ϕ(p') |^2 dp. Let's define the following variables: A(p) = √(2π e^p)exp(-i/2Iħ· p^2T) i^e^p J_e^p(K), B(p) = e^p p. and rewrite the F(nT) as following: F(nT) = | ∫[ A(p) exp(i n B(p)) ] ϕ^*(p) ×[ A(p') exp(i n B(p')) exp(-i/2Iħ nT δ p^2 ) exp(-i/2Iħ nT 2p δ p ) ] ϕ(p') |^2 dp. Now, expanding the expression inside the absolute square and separating the integrals over p and p': F(nT) = | ∫ A(p) A(p') exp(i n δ p^2 T/2Iħ) exp(i n δ p Tp/Iħ) ×exp(i n (B(p) - B(p'))) ϕ^*(p) ϕ(p') |^2 dp dp'. As δ p ≈ 0, ∫ϕ^*(p) ϕ(p + δ p) dp ≈ 1. If the wave-functions ϕ(p) are normalized, we can simplify this further to: F(nT) = | exp(i n δ p^2 T/2Iħ) exp(i n δ p Tp_avg/Iħ) ×∫ A(p) , A(p + δ p) exp(i n (B(p) - B(p + δ p))) |^2 dp. where p_avg is some average value of p over the integral range. Taking the natural logarithm of both sides of the equation, we get: ln F(nT) = 2i n δ p^2 T/2Iħ + 2i n δ p Tp_avg/Iħ + 2 ln| ∫ A(p) A(p + δ p) exp(i n (B(p) - B(p + δ p))) dp |. Taking the limit δ p → 0, we get: QLE = -lim_δ p → 01/nTln| ∫ A(p) , A(p + δ p) , exp(i n (B(p) - B(p + δ p))) dp |. If B(p) is a smooth function and does not vary rapidly with p, we could further approximate this bas following: QLE≈ -1/nTln| ∫ |A(p)|^2 dp |. 
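To complement the analytical approximation above, the separation rate captured by the QLE can also be probed numerically. The sketch below evolves two initially close Gaussian wave packets in momentum space with the exact one-period propagator U(T) — implemented in the standard split-step form, with the kick applied in the angle representation, which is equivalent to the Bessel-sum expression up to the conventions used above — and converts the fidelity decay into the rate -ln F(nT)/(nT). All parameter values, the grid size and the choice of initial states are illustrative assumptions rather than quantities fixed by the derivation.
[language=Python]
import numpy as np

# Illustrative parameters (not taken from the text): hbar, moment of inertia I,
# kick period T, kick strength K, and the size of the momentum grid.
hbar, I, T, K = 1.0, 1.0, 1.0, 5.0
N = 2048
m = np.arange(-N // 2, N // 2)
p = hbar * m                                    # discrete momentum grid p = hbar * m
theta = 2.0 * np.pi * np.arange(N) / N          # conjugate angle grid

free_phase = np.exp(-1j * p**2 * T / (2.0 * I * hbar))   # exp(-i p^2 T / (2 I hbar))
kick_phase = np.exp(-1j * K * np.cos(theta) / hbar)      # exp(-i K cos(theta) / hbar)

def kick_step(psi_p):
    """One period of U(T): kick applied in the angle representation, then free rotation."""
    psi_theta = np.fft.fft(psi_p)        # momentum -> angle
    psi_theta *= kick_phase
    psi_p = np.fft.ifft(psi_theta)       # angle -> momentum
    return free_phase * psi_p

def gaussian_packet(p0, sigma):
    psi = np.exp(-(p - p0)**2 / (2.0 * sigma**2)).astype(complex)
    return psi / np.linalg.norm(psi)

def fidelity(psi1, psi2):
    return np.abs(np.vdot(psi1, psi2))**2

# Two initially close states, separated by a small momentum shift delta_p.
delta_p = 0.05
psi1 = gaussian_packet(p0=0.0, sigma=4.0)
psi2 = gaussian_packet(p0=delta_p, sigma=4.0)

n_kicks = 50
qle_estimates = []
for n in range(1, n_kicks + 1):
    psi1, psi2 = kick_step(psi1), kick_step(psi2)
    F = fidelity(psi1, psi2)
    qle_estimates.append(-np.log(max(F, 1e-300)) / (n * T))

print("QLE estimate after %d kicks: %.4f" % (n_kicks, qle_estimates[-1]))
Because the split-step evolution is exactly unitary, the fidelity here decays only through the relative dephasing of the two packets, which is precisely the effect the QLE is intended to quantify.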
This final expression for the Quantum Lyapunov Exponent, under the assumptions we have made, can be used to quantify the average exponential rate of separation of initially close quantum states in the system, providing a measure of the system's sensitivity to initial conditions. A high QLE can indicate a kind of quantum analog of chaos in terms of information scrambling § IMPLEMENTATION OF QLE APPROXIMATION FOR VARIOUS INITIAL STATES Now, we will apply the above derived approximations for different states and analyze the resultant final expression. §.§ Localized in momentum space For the case where A(p) is localized in momentum space, we suppose that it has a Gaussian form centered around p_0 with width σ and can be represented as following: A(p) ≈exp[-(p - p_0)^2/2σ^2]. Using standard results from Gaussian integrals, we find ∫ |A(p)|^2 dp = √(πσ^2). Substituting this into the QLE formula gives QLE≈ -1/nTln| ∫ |A(p)|^2 dp | = -1/nTln| √(πσ^2)|. Simplifying, we get QLE≈ -1/2nTln( πσ^2 ). In this case, the Quantum Lyapunov Exponent (QLE) depends on the width σ of the Gaussian in momentum space. For a small value of σ when state is more localized, the resultant QLE is small. Conversely, for large σ, QLE is small which is in alignment with the intuition that a less localized state is more sensitive to small perturbation and thus more chaotic. §.§ Uniform in Momentum Space In this case, A(p) is uniform in momentum space and therefore, we would expect A(p) to be a constant for all p within a certain range [-p_0, p_0], and zero otherwise. This can be represented mathematically as: A(p) = C, for -p_0 ≤ p ≤ p_0 0, otherwise Here, C is a normalization constant which we can choose such that the total probability is 1. This leads to C = 1/2p_0. The square of the absolute value |A(p)|^2 is then the same as A(p) in this case, and the integral of |A(p)|^2 over all p becomes: ∫ dp |A(p)|^2 = ∫_-p_0^p_0 dp C = C · 2p_0 = 1. Inserting this into the QLE formula gives: QLE≈ -1/nTln| ∫ |A(p)|^2 dp | = -1/nTln| 1 | = 0. When A(p) is uniform in momentum space, the Quantum Lyapunov Exponent (QLE) is zero. This indicates that for a state that is uniform in momentum space, small perturbations do not lead to an exponential divergence of trajectories and thereby, the system behaves less chaotically. This confirms with the fact that the QLE as a measure of the system's sensitivity to initial conditions. §.§ Spreading in Momentum Space In the case where A(p) is spreading in momentum space, let's suppose that the momentum distribution initially has some peak and it broadens over time as a Gaussian function: A(p) = 1/√(2πσ^2(t))exp(-(p-p_0)^2/2σ^2(t)), where p_0 is the peak momentum, σ(t) is the standard deviation of the distribution at time t, which we assume to be increasing as σ(t) = √(D t), with D being the diffusion constant. The square of the absolute value |A(p)|^2 in this case becomes: |A(p)|^2 = 1/2πσ^2(t)exp(-(p-p_0)^2/σ^2(t)). The integral of |A(p)|^2 over all p is equal to 1 due to the normalization of the Gaussian function: ∫ |A(p)|^2 dp= ∫_-∞^∞1/2πσ^2(t)exp(-(p-p_0)^2/σ^2(t)) dp = 1. Substituting this into the QLE formula yields: QLE≈ -1/nTln| ∫ |A(p)|^2 dp | = -1/nTln| 1 | = 0. Although, the Quantum Lyapunov Exponent (QLE) is zero, in this case, it does not necessarily mean that the system is not chaotic, but rather that the fidelity, as we have defined it, is insensitive to the spreading of the wave packet in momentum space. 
It merely measures the overlap of the wave packet with itself, not how it spreads over time. If the wave packet is spreading, but not changing its overall shape i.e., it remains Gaussian, then constant fidelity causes the QLE to become zero. However, the spreading can still lead to sensitive dependence on initial conditions if we consider the positions of individual particles. In this case, we might need to use other measures of quantum chaos, such as the out-of-time-order correlator (OTOC), to fully capture the chaotic behavior. §.§ Contracting in Momentum Space Let's analyze the situation where A(p) represents a contraction in momentum space over time. This could represent a physical scenario where the wave packet in momentum space is initially broad and then narrows down. We'll again represent the momentum distribution by a Gaussian function, which now shrinks over time: A(p) = 1/√(2πσ^2(t))exp(-(p-p_0)^2/2σ^2(t)), where p_0 is the peak momentum, and σ(t) is the standard deviation of the distribution at time t, which we now assume to be decreasing as σ(t) = 1/√(t), where we're considering t > 0. The square of the absolute value |A(p)|^2 in this case becomes: |A(p)|^2 = 1/2πσ^2(t)exp(-(p-p_0)^2/σ^2(t)). The integral of |A(p)|^2 over all p is again equal to 1 due to the normalization of the Gaussian function: ∫ |A(p)|^2 dp = ∫_-∞^∞1/2πσ^2(t)exp(-(p-p_0)^2/σ^2(t)) dp = 1. Substituting this into the QLE formula yields: QLE≈ -1/nTln| ∫ |A(p)|^2 dp | = -1/nTln| 1 | = 0. Similar to the previous case, we might need different measure for chaos such as Out-of-Time-Order Correlator (OTOC) or more advanced tools from quantum information theory. §.§ Oscillating in Momentum Space In an oscillating scenario, we can assume A(p) periodically changes between two states. For simplicity, we will use sinusoidal functions to represent the oscillations. We can represent this as: A(p) = A_0 cos(p ω t + ϕ), where A_0 is the amplitude, ω is the frequency, t is the time, and ϕ is the phase of the oscillation. Then, |A(p)|^2 becomes: |A(p)|^2 = A_0^2 cos^2(p ω t + ϕ). The integral of |A(p)|^2 over all p would have to be evaluated numerically, and it will also be a function of time due to the oscillations. Therefore, it will generally be a complex expression that may not have a simple analytic solution: ∫ |A(p)|^2 dp = ∫_-∞^∞ A_0^2 cos^2(p ω t + ϕ) dp. We substitute this into the QLE formula: QLE≈ -1/nTln| ∫ |A(p)|^2 dp |. In this case, the Quantum Lyapunov Exponent (QLE) will fluctuate as well as the magnitude of the momentum distribution oscillates over time. In particular, when the momentum distribution is minimal, the fidelity will decrease, resulting in a higher QLE, and vice versa. As such, the behavior of the QLE can be quite complex, reflecting the chaotic nature of the system. § OPTIMIZATION FRAMEWORK FOR THE COMPLEX STATE Let's assign the following weights (or coefficients): w_1 to localized, w_2 to uniform, w_3 to spreading, w_4 to contracting, and w_5 to oscillating states. Then, a complex state, |Ψ⟩, can be written as: |Ψ⟩ = w_1|localized⟩ + w_2|uniform⟩ + w_3|spreading⟩ + w_4|contracting⟩ + w_5|oscillating⟩ where |localized⟩, |uniform⟩, |spreading⟩, |contracting⟩, and |oscillating⟩ represent the states corresponding to each of these scenarios, and the w_i are complex coefficients such that |w_1|^2+|w_2|^2+|w_3|^2+|w_4|^2+|w_5|^2 = 1 for normalization purposes. 
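Such a constrained weighting of component distributions can be explored numerically with a standard constrained optimizer. The sketch below uses real-valued weights, the approximation QLE ≈ -(1/nT) ln|∫|A(p)|² dp| derived above as a proxy objective (anticipating the optimization framework developed next), and SciPy's SLSQP routine to enforce the normalization Σ_i w_i² = 1; the component profiles A_i(p) and all numerical settings are assumptions made only for this illustration.
[language=Python]
import numpy as np
from scipy.optimize import minimize

# Momentum grid and illustrative component distributions A_i(p) for the five
# scenarios (localized, uniform, spreading, contracting, oscillating). The
# specific shapes and parameters below are placeholders for the example only.
p = np.linspace(-10.0, 10.0, 2001)
n, T = 10, 1.0                                   # number of kicks and kick period

A_components = [
    np.exp(-(p - 0.0)**2 / (2.0 * 1.0**2)),      # localized Gaussian
    np.where(np.abs(p) <= 5.0, 0.1, 0.0),        # uniform on [-5, 5]
    np.exp(-(p - 0.0)**2 / (2.0 * 3.0**2)),      # broad ("spreading") Gaussian
    np.exp(-(p - 0.0)**2 / (2.0 * 0.5**2)),      # narrow ("contracting") Gaussian
    np.cos(2.0 * p)**2 * np.exp(-p**2 / 50.0),   # oscillating envelope
]

def qle(w):
    """Proxy QLE(w) = -(1/nT) ln | integral |A(p)|^2 dp | with A = sum_i w_i A_i(p)."""
    A = sum(wi * Ai for wi, Ai in zip(w, A_components))
    integral = np.trapz(np.abs(A)**2, p)
    return -np.log(np.abs(integral) + 1e-300) / (n * T)

# Normalization constraint sum_i w_i^2 = 1 (real weights for simplicity).
constraint = {"type": "eq", "fun": lambda w: np.sum(w**2) - 1.0}
w0 = np.ones(5) / np.sqrt(5.0)

# Minimize the proxy QLE; maximization would use lambda w: -qle(w) instead.
result = minimize(qle, w0, method="SLSQP", constraints=[constraint])
print("optimal weights:", np.round(result.x, 3), " proxy QLE:", round(result.fun, 4))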
The corresponding momentum distribution function A(p) would then be a superposition of the different forms we have considered, each weighted by their corresponding w_i. If we denote by A_i(p) the form of the momentum distribution in the i-th scenario, then the total momentum distribution would be: A(p) = w_1 A_1(p) + w_2 A_2(p) + w_3 A_3(p) + w_4 A_4(p) + w_5 A_5(p) This superposition state is a more general state and could be useful in a variety of situations. For instance, it allows us to model more complex behaviors that cannot be captured by a single type of momentum distribution. The weights w_i could be adjusted to match experimental data or to explore different theoretical scenarios. To develop an optimization framework that would help elucidate how different components or aspects of a quantum state contribute to its overall chaotic behavior, we could use a constrained optimization framework that attempts to find the weights which maximize or minimize the Quantum Lyapunov Exponent (QLE) for the complex state. Here's a broad structure of this optimization framework: Given the complex state: |Ψ⟩ = w_1|localized⟩ + w_2|uniform⟩ + w_3|spreading⟩ + w_4|contracting⟩ + w_5|oscillating⟩ We first define the QLE, λ, as a function of the weights w_i: λ = λ(w_1, w_2, w_3, w_4, w_5) Our objective function then becomes the QLE itself, which we aim to either maximize or minimize. If we denote f(w_1, w_2, w_3, w_4, w_5) = λ(w_1, w_2, w_3, w_4, w_5), then our objective function is: min/max f(w_1, w_2, w_3, w_4, w_5) This objective function is subject to the normalization constraint: g(w_1, w_2, w_3, w_4, w_5) = |w_1|^2 + |w_2|^2 + |w_3|^2 + |w_4|^2 + |w_5|^2 - 1 = 0 This forms a constrained optimization problem, which can be solved using techniques such as the method of Lagrange multipliers or optimization algorithms like Sequential Quadratic Programming (SQP), amongst others. Through this optimization framework, we can gain insight into how different components of a quantum state contribute to its overall chaotic behavior, by studying how changes in the weights w_i affect the QLE. This could be useful in applications such as quantum computing, where understanding and controlling quantum chaos could be important for improving the stability and functionality of quantum bits. § CONCLUSION In this work, we explore quantum chaos by examining the evolution of initially close states in the Quantum Kicked Rotor (QKR). Through careful analysis of the fidelity between evolving states, we derived an approximate measure of the Quantum Lyapunov Exponent (QLE), which quantifies the degree of chaos in the QKR. Derived from the notion of classical chaos, the QLE provides a quantitative measure of the sensitivity of the quantum system to initial conditions. The resulting expression provides valuable insights into the evolution of states in quantum systems and their sensitivity to initial conditions and consequently, the chaotic behaviour. Albeit QLE is not a straightforward quantum analog of the classical Lyapunov exponent due to the fundamentally different nature of quantum and classical systems, we believe that our QLE measure can be a useful tool for exploring the intricate domain of quantum chaos. We further analyzed the implications of the QLE for different initial states such as localized, uniform, and oscillating in momentum space. In the latter part of our study, we developed an optimization framework that allows for the representation of complex states as a weighted combination of simpler states. 
This innovative approach may significantly enhance our ability to visualize and understand the dynamics of complex quantum systems. Our work not only provides new insights into the nature of quantum chaos and the dynamical behavior of quantum systems, but also proposes practical methods for investigating these complex phenomena. The approximation of the Quantum Lyapunov Exponent and the proposed optimization framework together constitute a significant contribution to the field of quantum chaos. Future research can further refine these methods and extend them to other quantum systems, opening up new avenues for exploring the fascinating world of quantum dynamics. § DECLARATIONS §.§ Ethics approval and consent to participate Not applicable. This study did not involve human participants, human data, or human tissue. §.§ Consent for publication Not applicable. This manuscript does not contain data from any individual person. §.§ Availability of data and materials Not applicable. No new data were created or analyzed in this study. §.§ Competing interests The authors declare that they have no competing interests. §.§ Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. §.§ Authors' contributions VG contributed entirely to the conception and design of the study, performed the analysis, and drafted the manuscript. All authors read and approved the final manuscript. §.§ Acknowledgements Not applicable.
http://arxiv.org/abs/2307.00972v1
20230703124407
MoVie: Visual Model-Based Policy Adaptation for View Generalization
[ "Sizhe Yang", "Yanjie Ze", "Huazhe Xu" ]
cs.LG
[ "cs.LG", "cs.CV", "cs.RO" ]
Over-The-Air Federated Learning: Status Quo, Open Challenges, and Future Directions Lina Bariah, Hikmet Sari, and Mérouane Debbah =================================================================================== Visual Reinforcement Learning (RL) agents trained on limited views face significant challenges in generalizing their learned abilities to unseen views. This inherent difficulty is known as the problem of view generalization. In this work, we systematically categorize this fundamental problem into four distinct and highly challenging scenarios that closely resemble real-world situations. Subsequently, we propose a straightforward yet effective approach to enable successful adaptation of visual Model-based policies for View generalization (MoVie) during test time, without any need for explicit reward signals and any modification during training time. Our method demonstrates substantial advancements across all four scenarios encompassing a total of 18 tasks sourced from DMControl, xArm, and Adroit, with a relative improvement of 33%, 86%, and 152% respectively. The superior results highlight the immense potential of our approach for real-world robotics applications. Videos are available at https://yangsizhe.github.io/MoVie/yangsizhe.github.io/MoVie. § INTRODUCTION Visual Reinforcement Learning (RL) has achieved great success in various applications such as video games <cit.>, robotic manipulation <cit.>, and robotic locomotion <cit.>. However, one significant challenge for real-world deployment of visual RL agents remains: a policy trained with very limited views (commonly one single fixed view) might not generalize to unseen views. This challenge is especially pronounced in robotics, where a few fixed views may not adequately capture the variability of the environment. For instance, the RoboNet dataset <cit.> provides diverse views across a range of manipulation tasks, but training on such large-scale data only yields a moderate success rate (10%∼ 20%) for unseen views <cit.>. Recent efforts have focused on improving the visual generalization of RL agents <cit.>. However, these efforts have mainly concentrated on generalizing to different appearances and backgrounds. In contrast, view generalization presents a unique challenge as the deployment view is unknown and may move freely in the 3D space. This might weaken the weapons such as data augmentations <cit.> that are widely used in appearance generalization methods. Meanwhile, scaling up domain randomization <cit.> to all possible views is usually unrealistic because of the large cost and the offline nature of existing robot data. With these perspectives combined, it is difficult to apply those common approaches to address view generalization. In this work, we commence by explicitly formulating the test-time view generalization problem into four challenging settings: a) novel view, where the camera changes to a fixed novel position and orientation; b) moving view, where the camera moves continuously around the scene, c) shaking view, where the camera experiences constant shaking, and d) novel FOV, where the field of view (FOV) of the camera is altered initially. These settings cover a wide spectrum of scenarios when visual RL agents are deployed to the real world. By introducing these formulations, we aim to advance research in addressing view generalization challenges and facilitate the utilization of robot datasets <cit.> and deployment on physical robotic platforms. 
To address the view generalization problem, we argue that adaptation to novel views during test time is crucial, rather than aiming for view-invariant policies. We propose , a simple yet effective method for adapting visual Model-based policies to generalize to unseen Views. leverages collected transitions from interactions and incorporates spatial transformer networks (STN <cit.>) in shallow layers, using the learning objective of the dynamics model (DM). Notably, requires no modifications during training and is compatible with various visual model-based RL algorithms. It only necessitates small-scale interactions for adaptation to the deployment view. We perform extensive experiments on 7 robotic manipulation tasks (Adroit hand <cit.> and xArm <cit.>) and 11 locomotion tasks (DMControl suite <cit.>)), across the proposed 4 view generalization settings, totaling 18×4 configurations. improves the view generalization ability substantially, compared to strong baselines including the inverse dynamics model (IDM <cit.>) and the dynamics model (DM). Remarkably, attains a relative improvement of 86% in xArm and 152% in Adroit, underscoring the potential of our method in robotics. We are committed to releasing our code and testing platforms. To conclude, our contributions are three-fold: * We formulate the problem of view generalization in visual reinforcement learning with a wide range of tasks and settings that mimic real-world scenarios. * We propose a simple model-based policy adaptation method for view generalization (), which incorporates STN into the shallow layers of the visual representation with a self-supervised dynamics prediction objective. * We successfully showcase the effectiveness of our method through extensive experiments. The results serve as a testament to its capability and underscore its potential for practical deployment in robotic systems, particularly with complex camera views. § RELATED WORK Visual generalization in reinforcement learning. Agents trained by reinforcement learning (RL) from visual observations are prone to overfitting the training scenes, making it hard to generalize to unseen environments with appearance differences. A large corpus of recent works has focused on addressing this issue <cit.>. Notably, SODA <cit.> provides a visual generalization benchmark to better evaluate the generalizability of policies, while they only consider appearance changes of agents and backgrounds. Distracting control suite <cit.> adds both appearance changes and camera view changes into DMControl <cit.>, where the task diversity is limited. View generalization in robotics. The field of robot learning has long grappled with the challenge of training models on limited views and achieving generalization to unseen views. Previous studies, such as RoboNet <cit.>, have collected extensive video data encompassing various manipulation tasks. However, even with pre-training on such large-scale datasets, success rates on unseen views have only reached approximately 10%∼20% <cit.>. In recent efforts to tackle this challenge, researchers have primarily focused on third-person imitation learning <cit.> and view-invariant visual representations <cit.>, but these approaches are constrained by the number of available camera views. In contrast, our work addresses a more demanding scenario where agents trained on a single fixed view are expected to generalize to diverse unseen views and dynamic camera settings. Test-time training. 
There is a line of works that train neural networks at test-time with self-supervised learning in computer vision <cit.>, robotics <cit.>, and visual RL <cit.>. Specifically, PAD is the closest to our work <cit.>, which adds an inverse dynamics model (IDM) objective into model-free policies for both training time and test time and gains better appearance generalization. In contrast, we differ in a lot of aspects: (i) we focus on visual model-based policies, (ii) we require no modification in training time, and (iii) our method is designed for view generalization specifically. § PRELIMINARIES Formulation. We model the problem as a Partially Observable Markov Decision Process (POMDP) ℳ=⟨𝒪, 𝒜, 𝒯, ℛ, γ⟩, where 𝐨∈𝒪 are observations, 𝐚∈𝒜 are actions, ℱ: 𝒪×𝒜↦𝒪 is a transition function (called dynamics as well), r ∈ℛ are rewards, and γ∈[0,1) is a discount factor. During training time, the agent's goal is to learn a policy π that maximizes discounted cumulative rewards on ℳ, i.e., max𝔼_π[∑_t=0^∞γ^t r_t]. During test time, the reward signal from the environment is not accessible to agents and only observations are available, which are possible to experience subtle changes such as appearance changes and camera view changes. Model-based reinforcement learning. TD-MPC <cit.> is a model-based RL algorithm that combines model predictive control and temporal difference learning. TD-MPC learns a visual representation 𝐳 = h (𝐨) that maps the high-dimensional observation 𝐨∈𝒪 into a latent state 𝐳∈𝒵 and a latent dynamics model d: 𝒵×𝒜↦𝒵 that predicts the future latent state 𝐳^'=d (𝐳, 𝐚) based on the current latent state 𝐳 and the action 𝐚. MoDem <cit.> accelerates TD-MPC with efficient utilization of expert demonstrations 𝒟={D_1, D_2, ⋯, D_N } to solve challenging tasks such as dexterous manipulation <cit.>. In this work, we select TD-MPC and MoDem as the backbone algorithm to train model-based agents, while our algorithm could be easily extended to most model-based RL algorithms such as Dreamer <cit.>. § METHOD We propose a simple yet effective method, visual Model-based policy adaptation for View generalization (), which can accommodate visual RL agents to novel camera views at test time. Learning objective for the test time. Given a tuple (𝐨_t, 𝐚_t, 𝐨_t+1), the original latent state dynamics prediction objective can be written as ℒ_dynamics = d( h( 𝐨_t), 𝐚_t) - h( 𝐨_𝐭+1) _2, where h is an image encoder that projects a high-dimensional observation from space 𝒪 into a latent space 𝒵 and d is a latent dynamics model d: 𝒵×𝒜↦𝒵. In test time, the observations under unseen views lie in a different space 𝒪^', so that their corresponding latent space also changes to 𝒵^'. However, the projection h learned in training time can only map 𝒪↦𝒵 while the policy π only learns the mapping 𝒵↦𝒜, thus making the policy hard to generalize to the correct mapping function 𝒵^'↦𝒜. Our proposal is thus to adapt the projection h from a mapping function h: 𝒪↦𝒵 to a more useful mapping function h^': 𝒪^'↦𝒵 so that the policy would execute the correct mapping 𝒵↦𝒜 without training. A vivid illustration is provided in Figure <ref>. We freeze the latent dynamics model d, denoted as d^⋆, so that the latent dynamics model is not a training target but a supervision. We also insert STN blocks <cit.> into the shallow layers of h to better adapt the projection h, so that we write h as h^SAE (SAE denotes spatial adaptive encoder). 
Though the objective is still the latent state dynamics prediction loss, the supervision here is superficially identical but fundamentally different from training time. The formal objective is written as ℒ_view = d^⋆( h^SAE( 𝐨), 𝐚) - h^SAE( 𝐨_𝐭+1) _2. Spatial adaptive encoder. We now describe more details about our modified encoder architecture during test time, referred to as spatial adaptive encoder (SAE). To keep our method simple and fast to adapt, we only insert two different STNs into the original encoder, as shown in Figure <ref>. We observe in our experiments that transforming the low-level features (i.e., RGB features and shallow layer features) is most critical for adaptation, while the benefit of adding more STNs is limited (see Table <ref>). An STN block consists of two parts: (i) a localisation net that predicts an affine transformation with 6 parameters and (ii) a grid sampler that generates an affined grid and samples features from the original feature map. The point-wise affine transformation is written as ([ x^s; y^s ])=𝒯_ϕ(G)=A_ϕ([ x^t; y^t; 1 ])=[[ ϕ_11 ϕ_12 ϕ_13; ϕ_21 ϕ_22 ϕ_23 ]]([ x^t; y^t; 1 ]) where G is the sampling grid, (x_i^t, y_i^t) are the target coordinates of the regular grid in the output feature map, (x_i^s, y_i^s) are the source coordinates in the input feature map that define the sample points, and A_ϕ is the affine transformation matrix. Training strategy for SAE. We use a learning rate 1× 10^-5 for STN layers and 1× 10^-7 for the encoder. We utilize a replay buffer with size 256 to store history observations and update 32 times for each time step to heavily utilize the online data. Implementation details remain in Appendix <ref>. § EXPERIMENTS In this section, we investigate how well an agent trained on a single fixed view generalizes to unseen views during test time. During the evaluation, agents have no access to reward signals, presenting a significant challenge for agents to self-supervise using online data. §.§ Experiment Setup Formulation of camera view variations: a) novel view, where we maintain a fixed camera target while adjusting the camera position in both horizontal and vertical directions by a certain margin, b) moving view, where we establish a predefined trajectory encompassing the scene and the camera follows this trajectory, moving back and forth for each time step while focusing on the center of the scene, c) shaking view, where we add Gaussian noise onto the original camera position at each time step, and d) novel FOV, where the FOV of the camera is altered once, different from the training phase. A visualization of four settings is provided in Figure <ref> and videos are available in https://yangsizhe.github.io/MoVie/yangsizhe.github.io/MoVie for a better understanding of our configurations. Details remain in Appendix <ref>. Tasks. Our test platform consists of 18 tasks from 3 domains: 11 tasks from DMControl <cit.>, 3 tasks from Adroit <cit.>, and 4 tasks from xArm <cit.>. A visualization of the three domains is in Figure <ref>. Although the primary motivation for this study is for addressing view generalization in real-world robot learning, which has not yet been conducted, we contend that the extensive range of tasks tackled in our research effectively illustrates the potential of for real-world application. We run 3 seeds per experiment with seed numbers 0,1,2 and run 20 episodes per seed. During these 20 episodes, the models could store the history transitions. 
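The adaptation procedure can be summarized in a short PyTorch-style sketch. It assumes a pre-trained encoder augmented with STN blocks (reduced here to a minimal affine-transform module) and a frozen latent dynamics model; the buffer size, batch size, learning rates and number of updates per step follow the values quoted above, while the module definitions, tensor shapes and names (e.g. buffer.sample, dynamics(z, a)) are illustrative rather than the released implementation.
[language=Python]
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniSTN(nn.Module):
    """Minimal STN block: a localization net predicts 6 affine parameters,
    which define a sampling grid applied to the input feature map (Eq. for A_phi)."""
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 6),
        )
        # Initialize to the identity transform so adaptation starts from the trained encoder.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                       # predicted affine matrix
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

def adapt_step(sae_encoder, dynamics, buffer, opt, n_updates=32, batch_size=32):
    """One environment step of test-time adaptation: the frozen dynamics model d*
    supervises the spatial adaptive encoder h_SAE through the latent prediction loss."""
    for param in dynamics.parameters():
        param.requires_grad_(False)                  # d* is frozen and acts as supervision
    for _ in range(n_updates):
        obs, act, next_obs = buffer.sample(batch_size)           # stored online transitions
        pred = dynamics(sae_encoder(obs), act)                   # d*(h_SAE(o_t), a_t)
        target = sae_encoder(next_obs)                           # h_SAE(o_{t+1})
        loss = F.mse_loss(pred, target)                          # L_view
        opt.zero_grad()
        loss.backward()
        opt.step()

# Example optimizer with the two learning rates quoted above (the optimizer choice
# itself is an assumption): STN parameters at 1e-5, remaining encoder weights at 1e-7.
# opt = torch.optim.Adam([{"params": stn_parameters, "lr": 1e-5},
#                         {"params": encoder_parameters, "lr": 1e-7}])
Initializing the affine parameters to the identity keeps h^SAE equal to the trained encoder h before any test-time update, so any change in behaviour comes only from the adaptation itself.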
We report cumulative rewards for DMControl tasks and success rates for robotic manipulation tasks, averaging over episodes. Baselines. We first train visual model-based policies with TD-MPC <cit.> on DMControl and xArm environments and MoDem <cit.> on Adroit environments under the default configurations and then test these policies in the view generalization setting. Since consists of two components mainly (i.e., DM and STN), we build two test-time adaptation algorithms by replacing each module: a) DM, which removes STN from and b) IDM+STN, which replaces DM in with IDM <cit.>. IDM+STN is very close to PAD <cit.>, and we add STN for fair comparison. We keep all training settings the same. §.§ Main Experiment Results Considering the extensive scale of our conducted experiments, we present an overview of our findings in Table <ref> and Figure <ref>. The detailed results for four configurations are provided in Table <ref>, Table <ref>, Table <ref>, and Table <ref> respectively. We then detail our findings below. Superiority of across domains, especially in robotic manipulation. It is observed that irrespective of the domain or configuration, consistently outperforms TD-MPC without any adaptation and other methods that apply DM and IDM. This large improvement highlights the fundamental role of our straightforward approach in addressing view generalization. Additionally, we observe that exhibits greater suitability for robotic manipulation tasks. We observe a significant relative improvement of 86% in xArm tasks and 152% in Adroit tasks, as opposed to a comparatively modest improvement of only 33% in DMControl tasks; see Table <ref>. We attribute this disparity to two factors: a) the inherent complexity of robotic manipulation tasks and b) the pressing need for effective view generalization in this domain. Challenges in handling shaking view. Despite improvements of in various settings, we have identified a relative weakness in addressing the shaking view scenario. For instance, in xArm tasks, the success rate of is only 45%, which is close to the 42% success rate of TD-MPC without adaptation. Other baselines such as DM and IDM+STN also experience performance drops. We acknowledge the inherent difficulty of the shaking view scenario, while it is worth noting that in real-world robotic manipulation applications, cameras often exhibit smoother movements or are positioned in fixed views, partially mitigating the impact of shaking view. Effective adaptation in novel view, moving view, and novel FOV scenarios. In addition to the shaking view setting, consistently outperforms TD-MPC without adaptation by 2×∼ 4× in robotic manipulation tasks across the other three settings. It is worth noting that these three settings are among the most common scenarios encountered in real-world applications. Real-world implications. Our findings have important implications for real-world deployment of robots. Previous methods, relying on domain randomization and extensive data augmentation during training, often hinder the learning process. Our proposed method enables direct deployment of offline or simulation-trained agents, improving success rates with minimal interactions. §.§ Ablations To validate the rationale behind several choices of and our test platform, we perform a comprehensive set of ablation experiments. Integration of STN with low-level features. 
To enhance view generalization, we incorporate two STN blocks <cit.>, following the image observation and the initial convolutional layer of the feature encoder. This integration is intended to align the low-level features with the training view, thereby preserving the similarity of deep semantic features to the training view. As shown in Table <ref>, by progressively adding more layers to the feature encoder, we observe that deeper layers do not provide significant additional benefits, supporting our intuition for view generalization. Different novel views. We classify novel views into three levels of difficulty, with our main experiments employing the medium difficulty level by default. Table <ref> presents additional results for the easy and hard difficulty levels. As the difficulty level increases, we observe a consistent decrease in the performance of all the methods. The efficacy of STN in conjunction with IDM. Curious readers might be concerned about our direct utilization of IDM+STN in the main experiments, suggesting that STN could potentially be detrimental to IDM. However, Table <ref> demonstrates that STN not only benefits our method but also improves the performance of IDM, thereby validating our baseline selection. Finetune or freeze the DM. In our approach, we employ the frozen DM as a form of supervision to guide the adaptation process of the encoder. However, it remains unverified for the readers whether end-to-end finetuning of both the DM and the encoder would yield similar benefits. The results presented in Table <ref> demonstrate that simplistic end-to-end finetuning does not outperform , thereby reinforcing the positive results obtained by . § CONCLUSION In this study, we present , a method for adapting visual model-based policies to achieve view generalization. mainly finetunes a spatial adaptive image encoder using the objective of the latent state dynamics model during test time. Notably, we maintain the dynamics model in a frozen state, allowing it to function as a form of self-supervision. Furthermore, we categorize the view generalization problem into four distinct settings: novel view, moving view, shaking view, and novel FOV. Through a systematic evaluation of on 18 tasks across these four settings, totaling 64 different configurations, we demonstrate its general and remarkable effectiveness. One limitation of our work is the lack of real robot experiments, while our focus is on addressing view generalization in robot datasets and deploying visual reinforcement learning agents in real-world scenarios. In our future work, we would evaluate on real-world robotic tasks. § ACKNOWLEDGMENT This work is supported by National Key R&D Program of China (2022ZD0161700). plain § APPENDIX § IMPLEMENTATION DETAILS In this section, we describe the implementation details of our algorithm for training on the training view and test time training in view generalization settings on the DMControl <cit.>, xArm <cit.>, and Adroit <cit.> environments. We utilize the official implementation of TD-MPC <cit.> and MoDem <cit.> which are available at https://github.com/nicklashansen/tdmpc/github.com/nicklashansen/tdmpc and https://github.com/facebookresearch/modem/github.com/facebookresearch/modem as the model-based reinforcement learning codebase. During training time, we use the default hyperparameters in official implementation of TD-MPC and MoDem. We present relevant hyperparameters during both training and test time in Table <ref> and Table <ref>. 
One seed of our experiments could be run on a single 3090 GPU with fewer than 2GB and it takes ∼ 1 hours for test-time training. Training time setup. We train visual model-based policies with TD-MPC on DMControl and xArm environments, and MoDem on Adroit environments, We employ identical network architecture and hyperparameters as original TD-MPC and MoDem during training time. The network architecture of the encoder in original TD-MPC is composed of a stack of 4 convolutional layers, each with 32 filters, no padding, stride of 2, 7 × 7 kernels for the first one, 5 × 5 kernels for the second one and 3 × 3 kernels for all others, yielding a final feature map of dimension 3 × 3 × 32 (inputs whose framestack is 3 have dimension 84 × 84 × 9). After the convolutional layers, a fully connected layer with an input size of 288 performs a linear transformation on the input and generates a 50-dimensional vector as the final output. The network architecture of the encoder in original Modem is composed of a stack of 6 convolutional layers, each with 32 filters, no padding, stride of 2, 7 × 7 kernels for the first one, 5 × 5 kernels for the second one and 3 × 3 kernels for all others, yielding a final feature map of dimension 2 × 2 × 32 (inputs whose framestack is 2 have dimension 224 × 224 × 6). After the convolutional layers, a fully connected layer with an input size of 128 performs a linear transformation on the input and generates a 50-dimensional vector as the final output. Test time training setup. During test time, we train spatial adaptive encoder (SAE) to adapt to view changes. We insert STN blocks before and after the first convolutional layer of the original encoders in TD-MPC and MoDem. The original encoders are augmented by inserting STN blocks, resulting in the formation of SAE. Particularly, for the STN block inserted before the first convolutional layer, the input is a single frame. This means that when the frame stack size is N, N individual frames are fed into this STN block. This is done to apply different transformations to different frames in cases of moving and shaking view. To update the SAE, we collect online data using a buffer with a size of 256. For each update, we randomly sample 32 (observation, action, next_observation) tuples from the buffer as a batch. The optimization objective is to minimize the loss in predicting the dynamics of the latent states, as defined in Equation <ref>. During testing on each task, we run 20 consecutive episodes, although typically only a few or even less than one episode is needed for the test-time training to converge. To make efficient use of the data collected with minimal interactions, we employ a multi-update strategy. After each interaction with the environment, the SAE is updated 32 times. The following is the network architecture of the first STN block inserted into the encoder of TD-MPC. [language=Python, frame=none, basicstyle=, commentstyle=,columns=fullflexible, breaklines=true, postbreak=, escapeinside=(**)] STN_Block_0_TDMPC( (localization): Sequential( # By default, each image consists of three channels. Each frame in the observation is treated as an independent input to the STN. 
(0): Conv2d(in_channels=3, out_channels=8, kernel_size=7, stride=1) (1): MaxPool2d(kernel_size=4, stride=4, padding=0) (2): ReLU() (3): Conv2d(in_channels=8, out_channels=10, kernel_size=5, stride=1) (4): MaxPool2d(kernel_size=4, stride=4, padding=0) (5): ReLU() ) (fc_loc): Sequential( (0): Linear(in_dim=90, out_dim=32) (1): ReLU() (2): Linear(in_dim=32, out_dim=6) ) ) The following is the network architecture of the second STN block inserted into the encoder of TD-MPC. [language=Python, frame=none, basicstyle=, commentstyle=,columns=fullflexible, breaklines=true, postbreak=, escapeinside=(**)] STN_Block_1_TDMPC( (localization): Sequential( (0): Conv2d(in_channels=32, out_channels=8, kernel_size=7, stride=1) (1): MaxPool2d(kernel_size=3, stride=3, padding=0) (2): ReLU() (3): Conv2d(in_channels=8, out_channels=10, kernel_size=5, stride=1) (4): MaxPool2d(kernel_size=2, stride=2, padding=0) (5): ReLU() ) (fc_loc): Sequential( (0): Linear(in_dim=90, out_dim=32) (1): ReLU() (2): Linear(in_dim=32, out_dim=6) ) ) The following is the network architecture of the first STN block inserted into the encoder of MoDem. [language=Python, frame=none, basicstyle=, commentstyle=,columns=fullflexible, breaklines=true, postbreak=, escapeinside=(**)] STN_Block_0_MoDem( (localization): Sequential( # By default, each image consists of three channels. Each frame in the observation is treated as an independent input to the STN. (0): Conv2d(in_channels=3, out_channels=5, kernel_size=7, stride=2) (1): MaxPool2d(kernel_size=4, stride=4, padding=0) (2): ReLU() (3): Conv2d(in_channels=5, out_channels=10, kernel_size=5, stride=2) (4): MaxPool2d(kernel_size=4, stride=4, padding=0) (5): ReLU() ) (fc_loc): Sequential( (0): Linear(in_dim=90, out_dim=32) (1): ReLU() (2): Linear(in_dim=32, out_dim=6) ) ) The following is the network architecture of the second STN block inserted into the encoder of MoDem. [language=Python, frame=none, basicstyle=, commentstyle=,columns=fullflexible, breaklines=true, postbreak=, escapeinside=(**)] STN_Block_1_MoDem( (localization): Sequential( (0): Conv2d(in_channels=32, out_channels=8, kernel_size=7, stride=2) (1): MaxPool2d(kernel_size=3, stride=3, padding=0) (2): ReLU() (3): Conv2d(in_channels=8, out_channels=10, kernel_size=5, stride=2) (4): MaxPool2d(kernel_size=2, stride=2, padding=0) (5): ReLU() ) (fc_loc): Sequential( (0): Linear(in_dim=90, out_dim=32) (1): ReLU() (2): Linear(in_dim=32, out_dim=6) ) ) § ENVIRONMENT DETAILS We categorize the view generalization problem into four distinct settings: novel view, moving view, shaking view, and novel FOV. In this section, we provide descriptions of the implementation details for each setting. The detailed camera settings can be referred to in the code of the environments that we are committed to releasing or in the visualization available on our website https://yangsizhe.github.io/MoVie/yangsizhe.github.io/MoVie. Novel view. In this setting, for locomotion tasks (cheetah-run, walker-stand, walker-walk, and walker-run), the camera always faces the moving agent, while for other tasks, the camera always faces a fixed point in the environment. Therefore, as we change the camera position, the camera orientation also changes accordingly. Moving view. Similar to the previous setting, the camera also always faces the moving agent or a fixed point in the environment. The camera position varies continuously. Shaking view. To simulate camera shake, we applied Gaussian noise to the camera position (XYZ coordinates in meters) at each time step. 
For DMControl and Adroit, the mean of the distribution is 0, the standard deviation is 0.04, and we constrain the noise within the range of -0.07 to +0.07. For xArm, the mean of the distribution is 0, the standard deviation is 0.4, and we constrain the noise within the range of -0.07 to +0.07. Novel FOV. We experiment with a larger FOV. For DMControl, we modify the FOV from 45 to 53. For xArm, we modify the FOV from 50 to 60. For Adroit, we modify the FOV from 45 to 50. We also experiment with a smaller FOV and results are presented in Appendix <ref>. § VISUALIZATION OF FEATURE MAP TRANSFORMATION We visualize the first layer feature map of the image encoder from TD-MPC and MoVie in Figure <ref>. It is observed that the feature map from MoVie on the novel view exhibits a closer resemblance to that on the training view. § EXTENDED DESCRIPTION OF BASELINES TD-MPC. We test the agent trained on training view without any adaptation in the view generalization settings, DM. This is derived from MoVie by removing STN blocks, which just adapts encoder during test time. IDM+STN. This is derived from MoVie by replacing the dynamics model with the inverse dynamics model which predicts the action in between based on the latent states before and after transition. The inverse dynamics model is finetuned together with the encoder and STN blocks during testing. § ABLATION ON DIFFERENT FOVS In our main experiments, we consider the novel FOV as a FOV larger than the original. In Table <ref>, we present results for both smaller and larger FOV scenarios. Our method demonstrates the successful handling of both cases. § COMPARISON WITH OTHER VISUAL RL GENERALIZATION ALGORITHMS In addition to baselines in the main paper, we also compare with state-of-the-art methods that focus on visual generalization in RL, i.e., PIE-G <cit.> and SGQN <cit.>. Results on DMControl environments are shown in Table <ref>. It is observed that still outperforms these two methods that do not adapt in test time, while PIE-G could also achieve reasonable returns under the view perturbation. This may indicate that fusing the pre-trained networks from PIE-G into might lead to better results, which we leave as future work. Note that unlike , SGQN and PIE-G need strong data augmentation and modification during training time. § RESULTS OF ORIGINAL MODELS ON TRAINING VIEW The performance of the original agents without any adaptation under the training view is reported in Table <ref>, <ref>, and <ref> for reference. In the context of view generalization, it is evident that the performance of agents without adaptation significantly deteriorates.
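For completeness, the shaking-view perturbation described in the environment details above admits a very small sketch: zero-mean Gaussian noise is drawn for the camera position at every time step and clipped to the stated range. The values below correspond to the DMControl/Adroit setting, and drawing the noise independently at each step is our reading of the description.
[language=Python]
import numpy as np

def shaking_view_position(nominal_xyz, std=0.04, clip=0.07, rng=None):
    """Perturb the nominal camera position (in meters) for one time step."""
    rng = rng or np.random.default_rng()
    noise = np.clip(rng.normal(0.0, std, size=3), -clip, clip)
    return np.asarray(nominal_xyz) + noise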
http://arxiv.org/abs/2307.01111v1
20230703153555
Nonparametric Bayesian approach for quantifying the conditional uncertainty of input parameters in chained numerical models
[ "Oumar Baldé", "Guillaume Damblin", "Amandine Marrel", "Antoine Bouloré", "Loïc Giraldi" ]
stat.CO
[ "stat.CO", "stat.ME", "60G15, 62G07, 62J05, 62F15" ]
: A Graph Transformer for Semantic Segmentation of 3D Meshes Giuseppe Vecchio1, Luca Prezzavento1, Carmelo Pino2, Francesco Rundo2, Simone Palazzo1, Concetto Spampinato1 1 Department of Computer Engineering, University of Catania 2 ADG, R&D Power and Discretes, STMicroelectronics August 1, 2023 ========================================================================================================================================================================================================================================================================= Nowadays, numerical models are widely used in most of engineering fields to simulate the behaviour of complex systems, such as for example power plants or wind turbine in the energy sector. Those models are nevertheless affected by uncertainty of different nature (numerical, epistemic) which can affect the reliability of their predictions. We develop here a new method for quantifying conditional parameter uncertainty within a chain of two numerical models in the context of multiphysics simulation. More precisely, we aim to calibrate the parameters θ of the second model of the chain conditionally on the value of parameters λ of the first model, while assuming the probability distribution of λ is known. This conditional calibration is carried out from the available experimental data of the second model. In doing so, we aim to quantify as well as possible the impact of the uncertainty of λ on the uncertainty of θ. To perform this conditional calibration, we set out a nonparametric Bayesian formalism to estimate the functional dependence between θ and λ, denoted θ(λ). First, each component of θ(λ) is assumed to be the realization of a Gaussian process prior. Then, if the second model is written as a linear function of θ(λ), the Bayesian machinery allows us to compute analytically the posterior predictive distribution of θ(λ) for any set of realizations λ. The effectiveness of the proposed method is illustrated on several analytical examples. Keywords. Conditional Bayesian calibration, nonparametric regression, Gaussian process, empirical Bayes, cut-off models. AMS classification: 60G15, 62G07, 62J05, 62F15. § INTRODUCTION Over the last thirty years, numerical models have become important tools for modeling, understanding, analyzing and predicting the physical phenomena studied. In nuclear engineering, such models are essential because the physical experiments are often limited or even impossible for economic or ethical reasons (e.g., simulation of accidental transients for safety demonstration). The models, implemented to represent the physical reality faithfully, may depend on the specification of a large number of calibration parameters characterizing the studied phenomenon which are most often uncertain. Such parameters uncertainties are often due to a lack of knowledge of the phenomenon and, as a consequence, to some modeling assumptions. The process of quantifying parameter uncertainty according to the available experimental data is called model calibration. There are basically two types of calibration: deterministic calibration and calibration under uncertainties using Bayesian framework (Bayesian calibration) <cit.>. Bayesian calibration allows us to quantify the parameter uncertainty by probability distribution instead of a single optimal value as in deterministic calibration. 
Such an optimal value is generally obtained by minimizing a criterion which quantifies a difference between the simulated data and the available experimental data (e.g., least square criterion). This paper focuses on Bayesian calibration of models in the framework of multiphysics simulation. Multiphysics simulation refers to several numerical models of different physics which are connected to one another to simulate the entire phenomenon of interest. For example, in fuel simulation for nuclear power plants, the ALYCONE application is a multiphysics application <cit.>. It is composed of interdependent physical models that represent the mechanical, thermal and chemical behaviors of fuel rods in the core of pressurized water reactors. Among these different models, we will focus here on the thermal and the fission gas behavior models. The thermal model simulates the evolution of the temperature within the fuel rod during the fission reaction and provides as output, the associated temperature field. The fission gas behavior model, as a function of the thermal model, continuously represents the behavior of the fission products (fuel swelling and release of fission gas atoms) during the fission reaction. Figure <ref> represents the chaining of these two models and their integration within the global calculation workflow. These models represented by blue boxes have as input parameters, respectively, the thermal conductivity λ∈q (q= 1) and the parameters of the fission gas behavior model denoted θ∈p (p≥1). In this work, we are interested in the calibration of the parameters θ of the Model 2 (fission gas behavior model in Figure <ref>) conditionally on the parameter λ of the Model 1 (thermal model in Figure <ref>) where the probability density of λ is assumed to be known and need not be re-estimated (or calibrated). More precisely, we aim to estimate a conditional posterior probability density of θ|λ using experimental measurements of Model 2 (fission gas behavior model in our application context). Unless specific modelling assumptions, a posterior probability density is known upto a constant, so its estimation most often requires the use of Markov chain Monte Carlo (MCMC) methods <cit.>. For the purpose of computing the probability distribution of θ|λ, a naive approach would be to run as many independent Markov chains as the number of λ samples of interest. This approach is computationally expensive and omits that, under some regularity conditions, the conditional distribution θ|λ may give some informations about θ|λ^' if λ is not too far from λ^'. To overcome the above drawback, we propose a nonparameteric approach which directly learn the functional relation θ(λ) and infers its probability distribution. This approach is inspired by the work of <cit.> presented in another calibration context. The functional approach is based on the assumption that each component of θ(λ) is represented a priori as a trajectory of an independent Gaussian process. Moreover, it is implemented in the special case where the output of Model 2 is assumed to be linear in θ given λ. The proposed method eventually provides a posterior predictive probability distribution of θ conditionally on any λ and its feasibility is studied on analytical examples. This paper is divided into five sections. In Section <ref>, we present a short state of the art of Bayesian calibration of numerical models. Section <ref> is devoted to possible methods for conditional density estimation. 
In Section <ref>, we detail our approach, called GP-LinCC for Gaussian process and Linearization-based Conditional Calibration, and whose performances are illustrated in Section <ref>. Section <ref> concludes the paper. § BAYESIAN CALIBRATION OF NUMERICAL MODELS A physical system of interest r(x) ∈𝒴⊂d^' (here d^' = 1) can be seen as a function describing the relationship between the input x and the output of interest r(x). The vector x ∈𝒳⊂d is most often constituted of variables called control variables which typically represent experimental conditions and geometry of the physical system. Let y_θ, λ(x):=y^2_θ(y^1_λ(x)) be a deterministic numerical model resulting from the chaining of two physical models (like in Figure <ref>) and supposed to be representative of r(x). Inspired by the work of <cit.>, the experimental measurements z=(z_1,⋯, z_n)^t are related to the model outputs at the input values (x_i)_i=1^n by the following general probabilistic equation: z_i= y_θ, λ(x_i) + b(x_i) +ϵ_i, 1≤ i≤ n, where ϵ_i is the experimental uncertainty often considered as a realization of a zero-mean Gaussian distribution and the variance σ^2_ϵ_i is known. The unobserved term b(x), called model discrepancy in <cit.>, represents the gap between the numerical model y_θ, λ(x) and the physical system r(x) when the model is run at the optimal value (or true) but unknown (θ, λ) [Optimal value in the sense that the model run with (θ, λ) gives the best possible prediction accuracy. Note that the best parameter can be different from the true parameter <cit.>.] of the uncertain parameters. The term b(x) is often modeled by a Gaussian process <cit.>. In the literature dedicated to Bayesian calibration, the term b(x) is sometimes neglected and thus assumed to be indistinguishable from the experimental uncertainty <cit.>. Similarly, we will instead consider the simplified probabilistic equation: z_i = y_θ, λ(x_i) +ϵ_i, 1≤ i≤ n. The joint posterior distribution is obtained by using Bayes' formula: π_full(θ, λ|z)∝(z|θ, λ)π(θ, λ), where π(θ, λ) is the prior distribution quantiying the uncertainty of (θ,λ) before collecting the data z, (z|θ,λ) is the likelihood of the data z conditionally on the couple (θ,λ) and the posterior distribution π_full(θ, λ|z) quantifies the residual uncertainty of (θ,λ) conditionally on z. Sometimes, as in fuel simulation, there may be other sources of data which can bring information on some components of (θ,λ). In such cases, cut-off models can be used to properly partition the different sources of data that can provide information on the model parameters involved in the chaining <cit.>. Inspired by the work of <cit.>, Figure <ref> presents a cut-off model for the two chained physical models where the direct measurements w (a realization of a some random variable W) bring information on λ whereas the data z (a realization of random variable Z) are seen by experts as uninformative on this parameter. More precisely, in Figure <ref>, the graph is divided by a cut into two subgroups (left and rigth of the dotted red line). The posterior distribution of λ of the left subgroup is computed without considering the random variables of the right subgroup. Thus, the estimation of this posterior distribution is done only with the observation w of W despite its possible dependence on Z. On the opposite, when estimating the posterior distribution of θ of the right subgroup, the terms of the left subgroup are taken into account. 
Then, we can write the probability distribution of the parameters (θ, λ) conditionally on the complete data (W=w, Z=z) as: π_cut(θ,λ|w,z)=π(θ|λ,z)π(λ|w), where π(λ|w)∝(w|λ)π(λ) is the posterior distribution when w is observed only and π(θ|λ,z) is the posterior distribution of θ conditionally on λ. The distribution π_cut(θ,λ|w,z), called the cut distribution in <cit.>, is different from the true joint posterior probability distribution π_full(θ,λ|w,z) which is written as: π_full(θ,λ|w,z)=π(θ|λ,z)π(λ|w,z). The two distributions are linked by the following relation: π_cut(θ,λ|w,z) ∝π_full(θ,λ|w,z) /π(z|λ). Indeed, we have: π_full(θ,λ|w,z) = (z|θ,λ)π(θ|λ) ×(w|λ)π(λ)/π(w,z) = π(θ|λ,z)π(z|λ) ×π(λ|w)π(w)/π(w,z) =π_cut(θ,λ|w,z)×π(z|λ)/π(z|w). In the sequel, the posterior distribution π(λ|w) is assumed to be well-known and thus our goal comes down to estimating only the conditional posterior distribution π(θ|λ,z) in Equation (<ref>). In the next section, we present some potential methods to achieve this goal. § APPROACHES FOR CONDITIONAL DENSITY ESTIMATION We aim to estimate the distribution of θ conditionally on any λ varying in the support of π(λ|w) and deduce an associated estimator for θ(λ). At first glance, one could use a kernel density estimator (KDE) <cit.> to approximate the conditional posterior distribution π(θ|λ,z). This density estimator would require a large number N of samples {θ_i,λ_i}_i=1^N of π_cut(θ,λ|w,z) which could be generated by a Gibbs algorithm with possibly an inner Metropolis step (also called Metropolis-within-Gibbs) <cit.> as follows: θ_i∼π_cut(θ,λ_i-1|w,z)∝π_full(θ,λ_i-1|w,z) (Equation (<ref>)), λ_i∼π_cut(λ,θ_i|w,z)∝π_full(θ_i,λ|w,z) /π(z|λ), 1≤ i≤ N. However, in the general case, the marginal likelihood in the denominator of Equation (<ref>) is computationally intractable π(z|λ) =∫(z|θ,λ)π(θ|λ)dθ. As a result, the above sampling scheme is unfeasible because we are not able to sample the λ_i according to Equation (<ref>). Therefore, KDE-based appraoch could not be used with this sampling scheme to approximate π(θ|λ,z). Another solution may be for each λ drawn in π(λ|w), to use a Metropolis-Hastings algorithm (MH) <cit.> to approximate the associated conditional posterior distribution. More precisely, we would have the following MCMC sampling scheme: λ_i ∼π(λ|w), θ_i ∼π(θ|λ_i,z), 1≤ i≤ N. However, if the numerical model is computationally expensive, this MH-based sampling scheme could not be repeated for a large number of different λ. This is why we can envisage to carry out this scheme by considering only a limited number m of well-chosen values of λ, then using them to build a regression model to extrapolate the conditional distribution to another values of λ. This is presented hereafter. §.§ Moment-based estimation method We begin by creating a numerical design for λ, D_m:=(λ_1,⋯, λ_m)^t of size m by Latin hypercube sampling <cit.> to spread out the samples in the support of the distribution π(λ|w). Then, we estimate independently each conditional distribution π(θ|λ=λ_j, z) for j=1,⋯, m. Finally, we interpolate by a Gaussian process (GP) regression <cit.> the first two moments of the conditional posterior distribution of θ with respect to D_m in order to predict them at new realizations λ^⋆∉ D_m. More precisely, the steps are the following: * Creation of a numerical design D_m according to π(λ|w), * Computation of the m conditional posterior distributions π(θ|λ_j,z) by running a MH algorithm for each 1 ≤ j≤ m. 
Let (θ^j_i)_i=1^N be the resulting set of samples associated with λ_j and generated from π(θ|λ_j, z). The expectation and covariance matrix are then estimated by θ̅(λ_j):=(θ|λ_j,z)≈1/N∑_i=1^Nθ^j_i, (θ|λ_j,z)≈1/N∑_i=1^N (θ^j_i- θ̅(λ_j) )(θ^j_i- θ̅(λ_j) )^t. * Fitting of a GP model on the quantities (<ref>) and (<ref>) respectively based on their estimated values on D_m. From the two first moments learnt by the two GP, a Gaussian assumption is finally used to build a predictive distribution associated with any new realization λ^⋆∉ D_m. The main difficulty of Step (iii) lies in preserving the positive semi-definite property of covariance matrices in the GP regression model. There are some solutions to satisfy this property: * for p=1, GP regression on the log variance can be considered, * for p>1, the solution proposed in <cit.> is to first carry out a regression on the Cholesky factors and then to use the inverse Cholesky decomposition to obtain predicted matrices ensuring the positive semi-definite property. However, this method is rather complex and omits any assessment of the prediction uncertainty. To circumvent the use of an MH algorithm for estimating π(θ(λ)|z):=π(θ|λ,z) in Step (ii), we assume in the remainder of the paper that the output of the numerical model y_θ,λ(x) is linear in θ conditionally on λ. Regarding the fuel application which constitutes the applicative framework, the available simulations of the fission gas behavior model show that this linearity assumption is reasonable. With this assumption, Steps (i) and (ii) are easier to implement because MH sampling is not required anymore. However, the problem of interpolating covariance matrices remains. This is why we propose in the next section a new method for the approximation of π(θ(λ)|z) still under the framework of the linear assumption. §.§ Proposed solution: method based on GP-prior and linear assumption (GP-LinCC method) We aim to estimate the conditional posterior distribution π(θ|z,λ) for any realization λ. Our approach is to learn the relationship between θ and λ through the function θ(λ) (called the calibration function) using the Bayesian paradigm based on three assumptions: a Gaussian process prior, the linearity of the output of the numerical model in θ(λ) and the compensation hypothesis that will be presented later (see Subsection <ref>). §.§.§ Gaussian Process prior Among the metamodels classically used for approximating computationally intensive numerical models, the GP regression is a powerful tool for nonparametric function metamodeling <cit.>. The theory of GP is extensively detailed in <cit.> and more recently in <cit.>. Choosing it as a prior class of random functions (characterized by its mean and covariance functions) and conditioning on observed data yields a posterior GP which delivers a Gaussian predictive distribution for the model output at each prediction point with anaytical formulas for mean and variance-covariance matrix <cit.> (see Appendix <ref>). In our case, we are quantifying the relationship between θ and λ by learning the function θ(λ)∈p. We suppose each component of θ(λ) follows an independent GP which can be written as θ_u(λ)indep∼𝒢𝒫(m_β_u(λ), σ_u^2 K_ψ_u (λ, λ^')), 1 ≤ u ≤ p, where m_β_u(λ) is the mean function (also called trend) of the u-th GP which has to be specified or estimated. A constant m_β_u(λ) = β_u or a one-degree polynomial trend is usually considered and recommended in practice. But any linear regression model on a set of known basis functions could be used instead. 
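To fix ideas, the GP prior above can be visualized by sampling trajectories of a single component θ_u(λ). The short Python sketch below does so with a constant trend and the Matérn 5/2 correlation introduced just below; the hyperparameter values (β_u, σ_u^2, ψ_u) are purely illustrative placeholders, since in practice they are estimated by marginal likelihood maximization, as detailed later.

import numpy as np

def matern52(lam1, lam2, psi):
    # Matérn 5/2 correlation between two sets of 1-D inputs lambda.
    d = np.abs(lam1[:, None] - lam2[None, :]) / psi
    return (1.0 + np.sqrt(5.0) * d + 5.0 / 3.0 * d**2) * np.exp(-np.sqrt(5.0) * d)

def sample_gp_prior(lam_grid, beta, sigma2, psi, n_draws=5, rng=None):
    # Draw trajectories of theta_u(.) ~ GP(beta, sigma2 * K_psi) on a grid of lambda values.
    rng = np.random.default_rng(rng)
    cov = sigma2 * matern52(lam_grid, lam_grid, psi) + 1e-10 * np.eye(lam_grid.size)
    return rng.multivariate_normal(np.full(lam_grid.size, beta), cov, size=n_draws)

# Illustrative (not estimated) hyperparameter values.
lam_grid = np.linspace(0.0, 1.0, 200)
draws = sample_gp_prior(lam_grid, beta=1.5, sigma2=0.5, psi=0.2, rng=0)
print(draws.shape)  # (5, 200): five prior trajectories of one component theta_u(lambda)
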
For simplicity, we will assume in the sequel that the prior mean is a constant β_u. The covariance function σ_u^2K_ψ_u(λ, λ^') controls the regularity and scale of the trajectories of the GP (ψ_u is the correlation length and σ_u^2 is the variance parameter). The covariance function σ_u^2K_ψ_u(λ, λ^') is positive semi-definite and encodes the dependence structure of the u-th GP. One of the most popular choice of the covariance function is the Matérn 5/2 covariance function, as recommended in <cit.>, among others. The Matérn 5/2 covariance function is given by: σ_u^2K_ψ_u(λ, λ^')=σ_u^2 (1+√(5)|λ-λ^'|/ψ_u+5/3|λ-λ^'|/ψ_u^2) e^-√(5)|λ-λ^'|/ψ_u, In the multidimensional case (i.e., λ∈q, q≥ 1), we can use a tensorization of 1D-Matérn 5/2 covariance function. §.§.§ Linear case The interest for a linear framework is that the posterior distribution of θ conditionally on λ can be computed explicitly without any use of MCMC algorithms. We will consider that for any λ, the output of the numerical model y_θ,λ(x_i) is linear in θ(λ). Thus, (<ref>) becomes: z_i=g_λ, 0(x_i) + g_λ, 1(x_i)^tθ(λ) + ϵ_i, 1≤ i≤ n, where ϵ_i ∼(0, σ^2_ϵ_i + δ^2_i) with δ^2_i being an extra scale parameter modelling the possible linearization error. In practice, the coefficients g_λ(x_i):=(g_λ, 0(x_i), g_λ, 1(x_i)) are often unknown and must be estimated by linearizing the code in θ(λ) at a fixed (λ, x_i). To perform the linearization, we fit for each (λ_j)_j=1,⋯, m∈ D_m and for each x_i, a linear regression from a set of training samples. More precisely, for each (λ_j)_j=1,⋯, m, a n_sim-size random sample of θ denoted Θ_j is generated and the corresponding simulations y_Θ_j, λ_j(x_i) are run. In the end, a total of m × n × n_sim simulations are performed. Each set of the n regression coefficients of these m different linear regressions will constitute the estimates of the coefficients (g_λ_j(x_i))_i=1, ⋯,n for each (λ_j)_j=1,⋯, m∈ D_m. §.§.§ Compensation hypothesis In the context of the fuel application considered and introduced in Section <ref>, the compensation hypothesis is considered by experts as a necessary condition to address the problem of conditional calibration in a consistent way. This hypothesis states that the experimental data z of the physical quantities of interest provide negligible information on the uncertainty of λ (compared to that brought by w). And the same is desired for the corresponding simulated quantities y^2_θ(y^1_λ(x)) with x=(x_1, ⋯, x_n)^t (Figure <ref>). This means that π(λ|z,w) must be as close as possible to π(λ|w) in the sense of a dissimilarity measure (e.g. the Kullback-Leibler divergence). From Equations (<ref>) and (<ref>), this is equivalent to postulating that π_cut(θ,λ|z,w) is as close as possible to π_full(θ,λ|z,w), which happens when the ratio of the two marginal densities π(z|w) and π(z|λ) comes to 1 (see Equation (<ref>)). Therefore, this hypothesis will be satisfied if the likelihood of the data z conditionally on the couple (θ,λ) is non-identifiable. That is to say that for any λλ^', there would exist θ(λ) θ(λ^') giving the same likelihood <cit.>. § GP-LINCC METHOD: ESTIMATION AND PREDICTION §.§ GP-LinCC method resulting from Gaussian process prior and Linearization The combination of a Gaussian prior given by Equation (<ref>) with a Gaussian likelihood due to the linearity hypothesis leads to an explicit posterior Gaussian distribution: we speak in this case of a conjugate family <cit.>. 
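In practice, the linearization step described above amounts to m × n ordinary least-squares fits, each based on n_sim training runs of the numerical model. The sketch below is one possible implementation; the chained simulator y_model and the sampling distribution theta_sampler used to generate the training values of θ are user-supplied stand-ins, not objects defined in this paper.

import numpy as np

def linearize_model(y_model, design_lambdas, x_obs, theta_sampler, n_sim=50, rng=None):
    # For each lambda_j of the design D_m and each x_i, regress y_model(theta, lambda_j, x_i)
    # on theta over n_sim training runs, giving the intercepts g_{lambda_j,0}(x_i)
    # and slopes g_{lambda_j,1}(x_i) of the linearized model.
    rng = np.random.default_rng(rng)
    m, n = len(design_lambdas), len(x_obs)
    g0, g1 = None, None
    for j, lam in enumerate(design_lambdas):
        thetas = theta_sampler(n_sim, rng)                  # (n_sim, p) training values of theta
        if g0 is None:
            g0 = np.empty((m, n))
            g1 = np.empty((m, n, thetas.shape[1]))
        X = np.column_stack([np.ones(n_sim), thetas])       # OLS design matrix [1, theta]
        for i, x in enumerate(x_obs):
            y = np.array([y_model(t, lam, x) for t in thetas])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            g0[j, i], g1[j, i, :] = coef[0], coef[1:]
    return g0, g1

With these coefficients in hand, the Gaussian prior and the Gaussian likelihood induced by the linearization form a conjugate pair, which is what makes the explicit computations of this section possible.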
Thus, a Gaussian predictive distribution of θ conditionally on any λ of non-zero probability with respect to π(λ|w) is obtained, as detailed in Subsection <ref>. §.§ GP-LinCC method in practice To apply GP-LinCC method in practice to calibrate θ conditionally on λ, we need, in addition to the three basic assumptions described above (compensation, linearity of numerical model and GP prior), the following data: * a set of experimental data noted z of size n, * a set of simulations of the numerical model {y_θ, λ(x_i)}_i=1, ⋯, n for different values of λ and θ to estimate the coefficients (g_λ(x_i))_i=1, ⋯,n=(g_λ, 0(x_i), g_λ, 1(x_i))_i=1, ⋯,n of the linear model of Equation (<ref>). In the sequel, it is assumed that the coefficients of (g_λ(x_i))_i=1, ⋯,n of the linear model of Equation (<ref>) are well known and that g_λ, 0(x_i)=0, ∀ 1≤ i≤ n. §.§ Estimation: calculation of the posterior distribution Equation (<ref>) is applied to the m realizations of λ to learn the relation between θ and λ. Due to the compensation hypothesis presented above, we can then write m equations involving the vector of experimental data z=(z_1,⋯, z_n)^t∈n: z=g_λ_j(x)θ(λ_j) + ϵ, 1≤ j≤ m where g_λ_j(x):=(g_λ_j(x_1)^t,⋯,g_λ_j(x_n)^t )^t∈n × p and ϵ:=(ϵ_1,⋯,ϵ_n)^t. By summarizing these m equations into a single matrix equation, we obtain: (z,⋯,z)= (g_λ_1(x)θ(λ_1),⋯,g_λ_m(x)θ(λ_m) ) +(ϵ,⋯,ϵ). Let 𝐳:=(z,⋯,z)∈n× m be the matrix of the m copies of z and the associated macro parameter Θ_m:=(θ(λ_1),⋯,θ(λ_m))^t ∈m× p. Each component of this macro parameter is a realization of a multivariate normal distribution according to Equation (<ref>). Then, one way to infer the random matrix Θ_m is to work on the vectorized form of Θ_m noted Θ_m∈pm <cit.> (see Appendix <ref>). Therefore, the prior distribution on Θ_m is written as: π(Θ_m|ϕ) ∝ |_ϕ|^-1/2exp-1/2(Θ_m-M_β)^t _ϕ^-1(Θ_m-M_β), where M_β =(m_β_1(λ_1),⋯,m_β_p(λ_1),⋯, m_β_1(λ_m),⋯, m_β_p(λ_m))^t∈ℝ^pm, _ϕ ={(θ(λ_j),θ(λ_j^') ) }_j,j^'= 1 ^m, with (θ(λ_j),θ(λ_j^'))=diag{σ_l^2 K_ψ_l(λ_j, λ_j^')}_l=1^p, and ϕ:=(β_l,σ^2_l,ψ_k)^p_l=1. The expression of the likelihood of 𝐳 conditionally on Θ_m is given by: (|Θ_m) =(z_1=z,⋯,z_m=z|θ(λ_1),⋯,⋯θ(λ_m)) ∝∏_j=1^m exp-1/2(z- g_λ_j(x)^tθ(λ_j))^tΣ_ϵ^-1(z- g_λ_j(x)^tθ(λ_j)), with Σ_ϵ:=diag(σ^2_ϵ_1 +δ^2_1,⋯,σ^2_ϵ_n+δ^2_n). Finally, the posterior probability distribution is given by the Bayes' formula: π(Θ_m|,ϕ)∝(| Θ_m)π(Θ_m|ϕ). The posterior probability distribution π(Θ_m|,ϕ) is a multivariate normal distribution with mean [Θ_m|,ϕ] and covariance matrix _ϕ given below: [Θ_m|,ϕ]=_ϕ(_ϕ^-1M_β + G^tΣ_ϵ^-1z) ∈pm, _ϕ= (Δ^-1+_ϕ^-1)^-1∈pm × pm, where G=(g_λ_1(x),⋯,g_λ_m(x)) ∈n× pm, Δ^-1=diag(g_λ_1(x)^tΣ_ϵ^-1g_λ_1(x),⋯,g_λ_m(x)^tΣ_ϵ^-1g_λ_m(x) ). See Appendix <ref> for the proof of the theorem. From this posterior distribution, we can deduce a predictive distribution denoted π_pred(θ(λ^⋆)|, ϕ) associated with any new realization or set of realizations λ^⋆∉ D_m. §.§ Prediction: calculation of the predictive distribution For any new set of realizations λ^⋆=(λ_1^⋆,⋯,λ_k^⋆)^t belonging to the support of the distribution π(λ|w), a predictive distribution of θ(λ^⋆) can be derived by integrating the conditional Gaussian distribution π(θ(λ^⋆)|Θ_m,ϕ) over the posterior probability measure π(Θ_m|,ϕ)dΘ_m given by Theorem <ref>: π_pred(θ(λ^⋆)|, ϕ) = ∫π(θ(λ^⋆)|Θ_m,ϕ) π(Θ_m|, ϕ) dΘ_m. 
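Numerically, the posterior mean and covariance given in the theorem above can be assembled with a few lines of dense linear algebra. The sketch below assumes that the hyperparameters ϕ, the noise covariance Σ_ϵ and the m linearization matrices g_λ_j(x) (each of size n × p) are available as arrays; it is illustrative only, and Cholesky-based implementations would be preferable in practice.

import numpy as np

def matern52(a, b, psi):
    d = np.abs(a[:, None] - b[None, :]) / psi
    return (1.0 + np.sqrt(5.0) * d + 5.0 / 3.0 * d**2) * np.exp(-np.sqrt(5.0) * d)

def prior_cov(lambdas, sigma2, psi):
    # Prior covariance of vec(Theta_m), components ordered as (theta_1,...,theta_p) per lambda_j.
    p, m = len(sigma2), len(lambdas)
    C = np.zeros((p * m, p * m))
    for l in range(p):                                   # independent GP per component
        C[l::p, l::p] = sigma2[l] * matern52(lambdas, lambdas, psi[l])
    return C

def posterior_theta_m(z, G_list, Sigma_eps, M_beta, C_phi):
    # Posterior mean and covariance of vec(Theta_m); G_list holds the m matrices g_{lambda_j}(x).
    Se_inv = np.linalg.inv(Sigma_eps)
    p = G_list[0].shape[1]
    pm = p * len(G_list)
    Delta_inv = np.zeros((pm, pm))
    for j, g in enumerate(G_list):                       # block-diagonal Delta^{-1}
        Delta_inv[j * p:(j + 1) * p, j * p:(j + 1) * p] = g.T @ Se_inv @ g
    G = np.hstack(G_list)                                # (n, p*m)
    C_inv = np.linalg.inv(C_phi)
    Sigma_post = np.linalg.inv(Delta_inv + C_inv)
    mean_post = Sigma_post @ (C_inv @ M_beta + G.T @ Se_inv @ z)
    return mean_post, Sigma_post

The integral above then admits a closed form, stated next.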
The predictive distribution π_pred(θ(λ^⋆)|, ϕ) is a multivariate normal distribution with mean and covariance matrix respectively denoted by θ̅_pred(λ^⋆) and Σ_pred(λ^⋆,λ^⋆^') such that θ_pred(λ^⋆) := m⃗_β(λ^⋆)+ 𝐂(λ^⋆,D_m)_ϕ^-1([Θ_m|,ϕ]-M⃗_β), Σ_pred(λ^⋆,λ^⋆^') := _cond(λ^⋆,λ^⋆^') + 𝐂(λ^⋆,D_m)_ϕ^-1 _ϕ _ϕ^-1𝐂(D_m,λ^⋆^'), where m⃗_β(λ^⋆) =(m_β_1(λ^⋆_1),⋯,m_β_p(λ^⋆_1),⋯,m_β_1(λ^⋆_k),⋯,m_β_p(λ^⋆_k) )^t ∈pk, 𝐂(λ^⋆,D_m) ={( θ(λ^⋆_i),θ(λ_j) ) }_ 1 ≤ i ≤ k, 1 ≤ j ≤ m∈pk × pm, _cond(λ^⋆,λ^⋆^')= 𝐂(λ^⋆,λ^⋆^') - 𝐂(λ^⋆,D_m)_ϕ^-1𝐂(D_m,λ^⋆^') ∈pk × pk. The expression of θ̅_pred(λ^⋆) corresponds to the classical expression of the mean of a conditional GP where the unknown quantity Θ_m is replaced by its posterior expectation [Θ_m|,ϕ]. The mean θ_pred(λ^⋆) is a predictor of θ(λ^⋆) which does not require the knowledge of g_λ^⋆(x_i)_i=1^n for any λ^⋆. The predictive covariance matrix Σ_pred(λ^⋆,λ^⋆^') is composed of two terms. The first one _cond(λ^⋆,λ^⋆^') corresponds to the classical formula for the covariance matrix of a conditional GP when Θ_m is known (we would have in this case Σ_pred(λ^⋆,λ^⋆^')=_cond(λ^⋆,λ^⋆^')). The additional term is related to the lack of knowledge of Θ_m conveyed by _ϕ. Appendix (<ref>) details the calculations of this predictive distribution. All the previous formulas involve the hyperparameters ϕ which are never known in advance. We can estimate them either by likelihood maximization or by cross-validation <cit.>. This adds a preliminary step to enable us to evaluate the distributions of Theorems <ref> and <ref>. In this paper, the hyperparameters ϕ are estimated by marginal likelihood maximization <cit.>. This technique, kwown as empirical Bayes, consists in maximizing the marginal likelihood in ϕ obtained by integrating the likelihood (|Θ_m) over Θ_m: ϕ:=_ϕ∫(|Θ_m)π(Θ_m|ϕ)dΘ_m. Finally, the accuracy of the predictive distribution in Theorem <ref> can be evaluated by comparison with the distribution that we would obtain if using g_λ^⋆(x_i)_i=1^n along with a Jeffreys prior on θ(λ^⋆). The expression of this distribution, called target distribution, is given by: π_target(θ(λ^⋆)|z1^t_k) ∼_pk(θ̅_target(λ^⋆), Σ_target(λ^⋆,λ^⋆^') ). It is obtained from Theorem <ref> where the matrix _ϕ^-1, the vector M⃗_β and m are replaced respectively by the null matrix of pk × pk, the null vector of pk and k. Another target distribution might be the one using g_λ^⋆(x_i)_i=1^n along with a Gaussian prior whose hyperparameters are given by Equation (<ref>): π_targetGP(θ(λ^⋆)|z1^t_k,ϕ̂ ) ∼_pk(θ̅_targetGP(λ^⋆), Σ_targetGP(λ^⋆,λ^⋆^') ). It is derived from Theorem <ref> with m=k and ϕ=ϕ̂. The comparison between the three distributions (predictive and the two targets) will be done via the MSE criterion described in the next section. §.§ Accuracy criterion: MSE The criterion we consider is the mean square error (MSE). It is calculated by integrating over π(λ|w) the difference between the true value θ(λ) and the mean predictor θ̅_pred(λ) (respectively θ̅_target(λ), θ̅_targetGP(λ) for the target distributions): MSE=∫ (θ(λ)-θ̅_pred(λ))^t(θ(λ)-θ̅_pred(λ)) π(λ|w)dλ. In practice, it is approximated by a Monte Carlo estimator: 1/N_λ∑_j=1^N_λ(θ(λ_j)-θ̅_pred(λ_j))^t(θ(λ_j)-θ̅_pred(λ_j)), where (λ_j)_j= 1, ⋯, N_λ are a N_λ-size sample of i.i.d. realizations of λ. The lower the MSE, the more accurate the predictor θ̅_pred(λ). In the numerical examples, we will consider an empirical MSE for each component θ_u(λ) of θ(λ): MSE_u≈1/N_λ∑_j=1^N_λ(θ_u(λ_j)-θ̅_pred_u(λ_j))^2, 1 ≤ u≤ p (and resp. 
for θ̅_target(λ) and θ̅_targetGP(λ)). Afterwards, we can check to what extent the calibrated model {g_λ(x_i)^tθ̅_pred(λ)}_1≤ i≤ n is constant in λ (in other words, whether the compensation hypothesis is verified).

§.§ A test for checking the compensation hypothesis

The compensation hypothesis states that the predictions of the numerical model should be constant in λ. To inspect this hypothesis, we compute the predictive distribution of the numerical model associated with each λ:
π(g_λ(x)θ(λ)|𝐳,λ,ϕ) ∼𝒩_n(g_λ(x)θ̅_pred(λ), g_λ(x)Σ_pred(λ,λ)g_λ(x)^t).
By integrating over the whole uncertainty of λ, we have:
π(g_λ(x)θ(λ)|𝐳,ϕ) =∫π(g_λ^'(x)θ(λ^')|𝐳,ϕ, λ^') π(λ^'|w)dλ^'.
Using the cross-validation technique <cit.>, we can derive, for any configuration x_i and any λ, the predictive distribution of the associated numerical model, for 1≤ i≤ n:
π(g_λ(x_i)^tθ(λ)|𝐳_-i,λ,ϕ) ∼𝒩(g_λ(x_i)^t θ̅_pred, -i(λ), g_λ(x_i)^tΣ_pred, -i(λ, λ)g_λ(x_i)),
where 𝐳_-i is equal to 𝐳 without the column i, and θ̅_pred, -i and Σ_pred, -i are given by Theorem <ref> with 𝐳 replaced by 𝐳_-i. If the compensation hypothesis is achieved by the numerical model, then for any λ_1 ≠ λ_2, the distributions π(g_λ_1(x_i)^tθ(λ_1)|𝐳_-i,λ_1), π(g_λ_2(x_i)^tθ(λ_2)|𝐳_-i,λ_2) and π(g_λ(x_i)^tθ(λ)|𝐳_-i,ϕ) are expected to be similar to each other. Thus the predictive credibility interval of the distribution of the random variable g_λ_1(x_i)^tθ(λ_1) - g_λ_2(x_i)^tθ(λ_2)|𝐳_-i,λ_1,λ_2, ϕ is likely to cover 0. This random variable follows a normal distribution with mean and variance given respectively by:
μ_i(λ_1,λ_2) := g_λ_1(x_i)^t θ̅_pred, -i(λ_1)-g_λ_2(x_i)^t θ̅_pred, -i(λ_2),
σ^2_i(λ_1,λ_2) := g_λ_1(x_i)^tΣ_pred, -i(λ_1, λ_1)g_λ_1(x_i)- 2g_λ_1(x_i)^tΣ_pred, -i(λ_1, λ_2)g_λ_2(x_i) +g_λ_2(x_i)^tΣ_pred, -i(λ_2, λ_2)g_λ_2(x_i).
From this distribution, we can compute an empirical coverage probability of 0 at level 1-α, from N i.i.d. sample pairs (λ_1, λ_2)∼π(λ|w)×π(λ|w):
Δ̂(α,x_i) =1/N∑_j=1^N 1_{0 ∈[ μ_i(λ^j_1,λ^j_2) ± q_1-α/2√(σ^2_i(λ^j_1,λ^j_2))] },
where q_1-α/2 is the (1-α/2) quantile of the standard Gaussian distribution. The objective is to obtain Δ̂(α,x_i) as close as possible to the nominal coverage level 1-α. For instance, considering a standard level α=5%, if the coverage probability Δ̂(5%,x_i) is lower than 95%, then the compensation hypothesis must be questioned.

§ NUMERICAL EXAMPLES

§.§ Example 1 in 1D

We consider the following probabilistic model:
z_i = r +ϵ_i, 1≤ i≤ n,
where ϵ_i ∼𝒩(0, 2) and the physical system r is assumed to be constant and equal to 5. We model it by y_θ,λ = λθ, where π(λ|w) ∼𝒰[1, 10]. Equation (<ref>) can thus be rewritten as
z_i = λθ + ϵ_i, 1≤ i≤ n.
The prior distribution on Θ_m is chosen with:
* a Matérn 5/2 covariance function given by Equation (<ref>),
* a constant mean function: m_β(λ)=β.
A Latin hypercube sampling (LHS) <cit.> is used to sample λ∼π(λ|w) and generate the design D_m. The GP hyperparameters ϕ=(β,ψ, σ^2) are estimated by marginal likelihood maximization (Equation (<ref>)). Appendix <ref> details this estimation. Once ϕ is estimated and denoting ϕ̂ its estimator, the predictive distribution π_pred(θ(λ^⋆)|z1^t_k,ϕ̂) is computed for a vector λ^⋆∈ [1, 10]^k with k new realizations, and then compared to the target distribution π_target(θ(λ^⋆)|z1^t_k). Figure <ref> presents the results obtained with n=50 i.i.d. samples generated by Equation (<ref>), m=10 and k=500.
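As an aside, before commenting on this figure, the simulation setup of this first example can be reproduced in a few lines. In the sketch below, a simple stratified design stands in for the LHS design D_m, and only the closed-form target quantities are evaluated; the GP-based predictive distribution additionally requires the hyperparameter estimation detailed in the appendix.

import numpy as np

rng = np.random.default_rng(0)

# Data-generating process of Example 1: z_i = r + eps_i, with r = 5 and Var(eps_i) = 2.
n, r, var_eps = 50, 5.0, 2.0
z = r + rng.normal(0.0, np.sqrt(var_eps), size=n)

# Space-filling design for lambda on [1, 10] (stratified stand-in for an LHS design of size m).
m = 10
D_m = 1.0 + 9.0 * (np.arange(m) + rng.uniform(size=m)) / m

# Closed-form target posterior of theta given lambda (flat prior): z_i = lambda * theta + eps_i.
lam = np.linspace(1.0, 10.0, 500)
theta_target_mean = z.mean() / lam                 # target posterior mean, z_bar / lambda
theta_target_var = var_eps / (n * lam**2)          # target posterior variance
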
In this figure, the true function θ(λ)=r/λ is plotted together with the means of the predictive and target distributions, denoted respectively by θ̅_pred(λ) and θ̅_target(λ). The 95% credibility intervals associated with these two distributions are also represented. It can be seen that the predicted mean θ̅_pred(λ) is relatively close to both the true and target functions, and that the 95% predictive credibility intervals cover them relatively well. We have not plotted θ̅_targetGP(λ) (nor its 95% credibility interval). In fact, we can easily notice that θ̅_targetGP(λ) and Σ_targetGP(λ,λ) are very close to θ̅_target(λ) and Σ_target(λ,λ) respectively, because the estimated variance σ̂^2 of the GP is greater than Σ_target(λ,λ). These quantities are related by
Σ_targetGP(λ,λ)= Σ_target(λ,λ)/1+Σ_target(λ,λ)/σ̂^2≈σ^2_ϵ/nλ^2,
θ̅_targetGP(λ) =σ^2_ϵ/σ^2_ϵ + nλ^2σ̂^2β̂ + z̅/λ1/1 + σ^2_ϵ/nλ^2σ̂^2≈θ̅_target(λ),
where Σ_target(λ,λ)= σ^2_ϵ/nλ^2 = 2/nλ^2, θ̅_target(λ)=z̅/λ, σ̂^2 = 8.72 and β̂= 2.98.
For a given vector λ^⋆∈ [1, 10]^N_λ with N_λ=1000 new i.i.d. realizations of λ, we have computed the empirical MSE of the following estimators:
* θ̅_pred(λ^⋆), the mean of the predictive distribution given by Theorem <ref> with p=1,
* θ̅_target(λ^⋆), the mean of the target distribution with Jeffreys prior given by Equation (<ref>) with p=1,
* θ̅_targetGP(λ^⋆), the mean of the target distribution with a Gaussian prior on θ(λ^⋆) whose hyperparameters are estimated from D_m and z. It is given by Equation (<ref>) with p=1.
This procedure is randomly replicated 100 times for two sample sizes n ∈{50,100} of observations z and three sample sizes m ∈{10,15,20} of D_m. Each replication is performed with independent samples of z and independent LHS designs D_m. Figure <ref> shows the boxplots of the empirical MSE thus obtained. For both sample sizes n, two phenomena are observed as m increases:
* the boxplot of the empirical MSE of θ̅_pred(λ^⋆) comes close to that of θ̅_targetGP(λ^⋆),
* the results obtained for the two target distributions, computed with two different priors, differ little.
The impact of the prior choice is therefore limited in this case, mostly due to the use of empirical Bayes.

§.§ Example 2 in 2D

We consider the following analytical example:
* an additive physical system of interest: r(x)= r_1(x) + r_2(x) with r_1(x) ≠ r_2(x) ∀ x ∈ [-4, 4],
* the experimental data are linked to the system via:
z_i = r(x_i)+ ϵ_i, -4 ≤ x_i ≤ 4, i=1,⋯, n, ϵ_i ∼𝒩(0, σ^2_ϵ_i), σ^2_ϵ_i:= (0.06|r(x_i)|)^2.
We postulate the following linear numerical model, supposed to represent r(x) and to verify the compensation hypothesis:
r(x_i)=g_λ(x_i)^tθ(λ), θ(λ) ∈ℝ^2,
where the components of θ(λ) are given by:
θ_1(λ)=λsin(10λ) +1, θ_2(λ)= sin(2πλ/10)+0.2sin(20πλ/2.5) +1.75,
and π(λ|w)∼𝒰[0, 1]. The two functions θ_l(λ), l∈{1, 2}, are chosen from <cit.>. We need to construct g_λ(x_i) in such a way that the numerical model g_λ(x_i)^tθ(λ) remains unchanged in λ. To do this, we propose to set:
g_1(x_i, λ)= r_1(x_i)/θ_1(λ)=x_i^2 + x_i +1/θ_1(λ) and g_2(x_i,λ) = r_2(x_i)/θ_2(λ)=x_i^2 +x_i + 4/θ_2(λ).
With this choice, the compensation hypothesis is verified and, moreover, the matrix Δ of Theorem <ref> is invertible. The prior distribution on Θ_m is built from two independent GP priors, one for θ_1(λ) and one for θ_2(λ), each with:
* a Matérn 5/2 covariance function given by Equation (<ref>),
* a constant mean function: m_β_l(λ) = β_l, l ∈{1, 2}.
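A compact sketch of this data-generating process, and of the compensation property it satisfies by construction, is given below; the grid of x values and the random seed are arbitrary choices made for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Additive system r = r_1 + r_2 and the two components of the calibration function theta(lambda).
r1 = lambda x: x**2 + x + 1.0
r2 = lambda x: x**2 + x + 4.0
theta1 = lambda lam: lam * np.sin(10.0 * lam) + 1.0
theta2 = lambda lam: np.sin(2.0 * np.pi * lam / 10.0) + 0.2 * np.sin(20.0 * np.pi * lam / 2.5) + 1.75

def g_lambda(x, lam):
    # Linearization coefficients built so that g_lambda(x)^t theta(lambda) = r(x) for every lambda.
    return np.array([r1(x) / theta1(lam), r2(x) / theta2(lam)])

# Experimental data: n points on [-4, 4] with a 6% relative noise standard deviation.
n = 50
x_obs = np.linspace(-4.0, 4.0, n)
r_x = r1(x_obs) + r2(x_obs)
z = r_x + rng.normal(0.0, 0.06 * np.abs(r_x))

# Compensation holds by construction: the model output does not depend on lambda.
for lam in (0.1, 0.5, 0.9):
    assert np.allclose(g_lambda(x_obs, lam).T @ np.array([theta1(lam), theta2(lam)]), r_x)
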
Once ϕ=(β_l,σ^2_l, ψ_l)_1≤ l≤ 2 is estimated, the predictive distribution π_pred(θ(λ^⋆)|, ϕ̂) is computed for a vector λ^⋆∈ [1, 10]^k with k=500 new realizations, then compared to the target distribution π_target(θ(λ^⋆)|z1^t_k). Figures <ref> (a) and (b) present respectively the two components θ_1(λ) et θ_2(λ) of the true function θ(λ) as well as the two means of both the predictive and target distributions calculated with a sample of n=50 i.i.d. observations generated by Equation (<ref>) and a design D_m of size m=10. The 95% credibility intervals cover relatively well the two components of the true function ((a) for θ_1(λ) and (b) for θ_2(λ)). We also observe that the marginal predictors are able to approximate well the components of the true function and those of the target function. Figure <ref> presents the comparison between the marginal predictive and target densities for a specific λ^⋆, equal to 0.45. For this λ realization, the three densities are in good agreeement. For a given vector λ^⋆∈ [1, 10]^N_λ with N_λ=1000 new i.i.d. realizations of λ, we have computed the empirical MSE of the following estimators: * θ̅_pred(λ^⋆) the mean of the predictive distribution given by Theorem <ref> with p=2, * θ̅_target(λ^⋆) the mean of the target distribution with Jeffreys prior given by Equation (<ref>) with p=2, * θ̅_targetGP(λ^⋆) the mean of the target distribution with a Gaussian prior on θ(λ^⋆) whose hyperparameters are estimated from D_m and z. It is given by Equation (<ref>) with p=2. This procedure is randomly replicated 100 times for two sample sizes n ∈{50,100} of observations z and three sample sizes m ∈{10,15,20} of D_m. Results are given by Figures <ref> and <ref>, for θ_1(λ) and θ_2(λ) respectively. We can see that for m=10, the GP-LinCC method predicts better the first component than the second one. This could be due to the shape of these two components. However, when m increases, it predicts well the components of θ(λ). §.§ Example 3 in 1D with falsification of the compensation hypothesis In this example, we will test the GP-LinCC method in the case where the compensation hypothesis is violated. The sampling of experimental data now depends on λ. We have chosen a specific sample λ_0 such that: z_i = r_λ_0(x_i) + ϵ_i, 1≤ i≤ n, z_i = g_λ_0(x_i)^tθ(λ_0) + ϵ_i, 1≤ i≤ n, with r_λ not being equal to r_λ_0 if and only if λ is not equal to λ_0. For any λ, the observations z can actually be related to the numerical model outputs via: z_i = g_λ(x_i)^tθ(λ)+ b(x_i,λ) + ϵ_i, 1≤ i≤ n, where the discrepancy term b(x_i, λ) is equal to: b(x_i,λ) = g_λ_0(x_i)^tθ(λ_0)-g_λ(x_i)^tθ(λ) =g_λ(x_i)^t ( g_λ_0(x_i)^tθ(λ_0)/g_λ(x_i)^tθ(λ)-1)θ(λ) =(α(x_i,λ)-1) g_λ(x_i)^tθ(λ), where α(x_i,λ)=g_λ_0(x_i)^tθ(λ_0)/g_λ(x_i)^tθ(λ)=r_λ_0(x_i)/r_λ(x_i). Equation (<ref>) becomes: z_i =g_λ(x_i)^t α(x_i,λ)θ(λ) +ϵ_i, 1≤ i≤ n. As α(x_i,λ) is neglected by the GP-LinCC method, the mean estimators θ̅_pred(λ) are equal to θ̅_target(λ)=(g_λ(x)^tg_λ(x))^-1g_λ(x)^tz. By propagating these estimators through the model, we obtain the following calibrated predictions that depend on λ: r_λ(x_i):=g_λ(x_i)^t θ̅_pred(λ), 1≤ i≤ n ⟹r_λ(x) =g_λ(x)θ̅_pred(λ). For our test case, we choose the following physical system r_λ(x_i)= 3x_i^2+ 2λ^2 x_i+1+ λ, -2≤ x_i≤2, π(λ|w) ∼𝒰[0, 1] and the numerical model g_λ(x)θ(λ) such that g_λ(x_i)=r_λ(x_i)/θ(λ), θ(λ)=1 +λsin(10λ). The i.i.d. observations z=(z_1,⋯,z_n)^t are generated with λ_0=0.5 and the variance σ^2_i=0.06(r_λ_0(x_i))^2 for 1≤ i≤ n by Equation (<ref>). 
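This third example is precisely the situation that the compensation check introduced earlier is designed to flag. A minimal sketch of the empirical coverage probability Δ̂(α,x_i) is given below; the leave-one-out predictive mean μ_i and variance σ^2_i of the difference are passed in as user-supplied callables, since their computation simply reuses the predictive formulas with the data vector z replaced by z_-i.

import numpy as np
from scipy.stats import norm

def coverage_probability(mu_i, sigma2_i, lam_sampler, alpha=0.05, N=5000, rng=None):
    # Empirical coverage of 0 by the (1 - alpha) predictive interval of the difference
    # g_{lam1}(x_i)^t theta(lam1) - g_{lam2}(x_i)^t theta(lam2), i.e. Delta_hat(alpha, x_i).
    rng = np.random.default_rng(rng)
    q = norm.ppf(1.0 - alpha / 2.0)
    hits = 0
    for _ in range(N):
        lam1, lam2 = lam_sampler(rng), lam_sampler(rng)   # i.i.d. draws from pi(lambda | w)
        mu, sd = mu_i(lam1, lam2), np.sqrt(sigma2_i(lam1, lam2))
        hits += (mu - q * sd) <= 0.0 <= (mu + q * sd)
    return hits / N

Coverage values well below the nominal level 1-α indicate that the compensation hypothesis is violated, which is what we expect here.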
Once ϕ=(β_l,σ^2_l, ψ_l)_1≤ l≤ 2 is estimated, the predictive distribution is computed for a vector λ^⋆∈ [1, 10]^k with k=500 new realizations, then compared to the target distribution π_target(θ(λ^⋆)|z1^t_k). Figure <ref> presents the true function θ(λ), the predicted function θ̅_pred(λ) and the target function θ̅_target(λ) computed from a sample of z and a design D_m of size m=10, as well as the associated 95% credibility intervals. We can see that the predicted function does not match the true function whereas, as expected, it approximates well the target function. The predictors r_λ(x) for different λ and the physical system r_λ_0(x) are plotted in Figure <ref> where we can observe that these predictors vary with the chosen value of λ. This reveals that the calibrated model does not compensate in λ. This absence of compensation is also confirmed by the predictive distributions π(g_λ_j(x_i)^tθ(λ)|_-i,λ_j,ϕ̂), x_i ∈{x_1,x_4} for 1 ≤ j≤ 4 as presented in Figure <ref>. The empirical coverage probabilities Δ̂(5%, x_i), computed with N=5000 pairs of samples (λ_1,λ_2), presented in Figure <ref> do not exceed the threshold of 95%, revealing once again that the compensation hypothesis is not satisfied. These results are coherent with the simulation procedure of the observations z. Indeed, since z has been simulated with a nominal value λ_0=0.5, the predicted function θ_pred(λ) is not able to predict well the true function for new realizations of λ not close to λ_0. This can be observed in Figure <ref>. The GP-LinCC method based on both linearization and Gaussian processes has allowed us to perform a joint calibration of the calibration parameters θ(λ_j) for a set of training samples λ_j ∈ D_m. From this calibration, we have derived a predictive distribution for some new realizations λ^⋆ whose mean is the predictor of θ(λ^⋆) and covariance matrix is the predictive covariance matrix between the components of θ(λ^⋆). The accuracy of the predictive mean function θ̅_pred(λ) directly depends on both the size of the design D_m and the number of observations n. § CONCLUSION In this paper, we have proposed a new method to address the problem of conditional calibration in the framework of two chained numerical models. The parameters θ of the second model of the chain are calibrated conditionally on the parameters λ of the first model whose probability distribution is known. By leveraging some modeling assumptions consistent with the targeted application framework, the new method GP-LinCC, offers an analytical resolution, avoiding the use of a MCMC algorithm. The GP-LinCC method can learn the relationship between θ and λ via the calibration function θ(λ) from available experimental data and from a numerical design of the model for a set of λ. To do so, it relies on three hypotheses. First, the prior distribution for θ(λ) is assumed to be a Gaussian process of given mean and covariance functions (but whose parameters are to be estimated). Then, a compensation hypothesis supported by expert opinion is made to ensure the consistency of the calibration problem. This hypothesis states that the experimental data provide negligible information on the uncertainty of λ and, as a result, on the corresponding simulated quantities. Finally, the last assumption states that the output of the second model is linear in θ(λ). Under these three hypotheses, we have demonstrated that the GP-LinCC method provides a Gaussian predictive probability distribution of θ(λ^⋆) for any new set of realizations λ^⋆. 
We have implemented this method on analytical examples for small dimensions of the parameter θ, and the results obtained are convincing. Moreover, we propose a way to assess the accuracy of the predictive distribution by comparing it with a target distribution (unknown in practice but computable for analytical test cases). The latter corresponds to the posterior distribution that we would obtain if we knew the linearization coefficients of the model in λ^⋆ while assuming a Jeffreys prior on θ(λ^⋆). The predictive distribution has also been compared to another target distribution, obtained with a Gaussian prior whose hyperparameters are estimated on the numerical design for λ.

Further work is planned to apply the GP-LinCC method to the fuel application in pressurized water reactors which has motivated this methodological work: namely, the calibration of the parameters of the gas behavior model conditionally on the thermal conductivity λ. However, a preliminary sensitivity analysis must be done before deploying the GP-LinCC method on this physical problem. Indeed, the large dimension of θ (more than ten parameters) requires a pre-selection of the most important parameters. To achieve this, we propose to carry out a global sensitivity analysis in order to identify the most influential parameters to be calibrated. More precisely, we propose to use the multivariate version of the sensitivity indices based on the Hilbert-Schmidt independence criterion (HSIC). First introduced in <cit.>, HSIC indices have recently been extended to multivariate and functional outputs in <cit.>. Finally, the parameters θ may have bounded variation ranges related to their physical meaning. It would be necessary to incorporate such bound constraints in the GP-LinCC method to guarantee that the results make physical sense. The predictive distribution provided by the GP-LinCC method would thereby transform into a truncated multivariate normal distribution <cit.>.

§ ACKNOWLEDGMENTS

This work has been partly funded by the tripartite project devoted to Uncertainty Quantification, consisting of the French Alternative Energies and Atomic Energy Commission (CEA), Électricité de France (EDF) and Framatome (FRA). We would like to acknowledge Merlin Keller, research engineer at EDF R&D, who provided us with information on cut-off models; this was a great help in the mathematical formalization of the conditional calibration problem.

§ SOME USEFUL MATHEMATICAL RESULTS

§.§ Sherman-Morrison formula

Let A and B ∈n × n be two invertible matrices. Then we have:
(A^-1+ B^-1)^-1= A- A(A+B)^-1A = B- B(A+B)^-1B.

§.§ Woodbury-Sherman-Morrison identity

Let Z∈n × n, W∈ m × m, U and V ∈n× m be matrices. Suppose that Z and W are invertible. Then, the Woodbury identity and its associated determinant identity are:
(Z + U W V^t)^-1 = Z^-1 -Z^-1U(W^-1 +V^tZ^-1U)^-1V^tZ^-1,
|Z+ UWV^t| = |Z||W||W^-1+ V^tZ^-1U|.

§.§ Vectorization

Vectorization is an operator that transforms any matrix A ∈ m× p into a column vector denoted A⃗∈mp. This operation consists in stacking the columns of A successively, from the first to the last column of A. For example,
A =[ a b; c d ]∈ 2 × 2⇒A⃗= [ a; c; b; d ]∈ℝ^4.

§.§ Gaussian process

A Gaussian process (GP) is a collection of random variables, any finite subset of which follows a multivariate normal distribution. The basic idea of GP regression is to consider that the available observations (or realizations) of a variable of interest can be modeled by a GP prior.
Suppose that we have m observations Θ_m=(θ(λ_j))_1≤ j≤ m of the variable of interest which are realizations of a GP prior where θ(λ_j)∈ℝ, λ_j ∈ℝ^q (q ≥ 1) and D_m=(λ_1,⋯, λ_m)^t is a numerical design. A GP is fully defined by its mean function m_β(λ) and its covariance function σ^2K_ψ(λ, λ^'). The predictive GP distribution is therefore given by the GP conditioned by the known observations Θ_m. More precisely, this conditional distribution for any new set of λ^⋆=(λ^⋆_1,⋯, λ^⋆_k)^t can be obtained analytically from the following joint distribution: [ θ(λ^⋆); Θ_m ]∼_m + k( [ m_β(λ^⋆); m_β(D_m) ], σ^2[ K(λ^⋆,λ^⋆) K(λ^⋆, D_m)^t; K(λ^⋆,D_m) K(D_m,D_m) ]), where * m_β(D_m)=(m_β(λ_1),⋯, m_β(λ_m))^t∈m is the mean vector of GP evaluated in each location of D_m, * K(D_m,D_m)=(K_ψ(λ_i,λ_j))_1≤ i,j≤ m∈ℝ^m × m is the correlation matrix at sample of D_m, * K(λ^⋆,D_m)∈ℝ^k × m is the correlation matrix between λ^⋆ and D_m. By applying the conditioning formula of Gaussian vectors to the above joint distribution, we obtain that the conditional vector θ(λ^⋆)|Θ_m is still a GP characterized by its mean given by: θ(λ^⋆):=(θ(λ^⋆)|Θ_m,β, σ^2,ψ)= m_β(λ^⋆) + K(λ^⋆,D_m)K(D_m,D_m)^-1(Θ_m - m_β(D_m)), and its covariance function: Σ_pred(λ^⋆,λ^⋆^'):= σ^2(K(λ^⋆,λ^⋆)-K(λ^⋆,D_m)K(D_m,D_m)^-1K(D_m,λ^⋆^')), where the GP hyperparameters ϕ:=(β, σ^2,ψ) are not known in practice and have to be estimated. This estimation can be done by marginal likelihood maximization. § PROOF OF THE RESULTS OF SECTION <REF> §.§ Proof of Theorem <ref> From Bayes rule, we have: π(Θ_m |, ϕ)=(|Θ_m)π(Θ_m|ϕ)/∫(|Θ_m)π(Θ_m|ϕ) dΘ_m. We rewrite the likelihood function as a function of Θ⃗_m and we obtain: (|Θ_m) = 1/√(2π)^nm |Σ_ϵ|^m/2mj=1∏exp(-1/2(z -g_λ_j(x)θ(λ_j))^t Σ_ϵ^-1(z -g_λ_j(x)θ(λ_j))) ∝exp-1/2( m z^tΣ_ϵ^-1z -2 ∑_j=1^m z^tΣ_ϵ^-1 g_λ_j(x)θ(λ_j) + ∑_j=1^mθ(λ_j)^tg_λ_j(x)^t Σ_ϵ^-1 g_λ_j(x)θ(λ_j) ) =√(2π)^-nm/|Σ_ϵ|^m/2exp-1/2( m z^tΣ_ϵ^-1z -2 z^t Σ_ϵ^-1GΘ⃗_m + Θ⃗_m^t Δ^-1Θ⃗_m ), where G = (g_λ_1(x),⋯,g_λ_m(x)) ∈n × pm, Δ^-1 =diag(g_λ_1(x)^tΣ_ϵ^-1g_λ_1(x),⋯ g_λ_m(x)^tΣ_ϵ^-1g_λ_m(x))∈pm × pm. Then, we have: (|Θ_m) π(Θ_m|ϕ)= √(2π)^-nm/|Σ_ϵ|^m/2exp-1/2( m z^tΣ_ϵ^-1z -2 z^t Σ_ϵ^-1GΘ⃗_m + Θ⃗_m^t Δ^-1Θ⃗_m ) × 1/√(2π)^pm |_ϕ|^1/2exp-1/2(Θ⃗_m -M⃗_β)^t_ϕ^-1(Θ⃗_m -M⃗_β). By grouping the terms in Θ⃗_m, we obtain: (|Θ_m) π(Θ_m|ϕ)= √(2π)^-m(n+p)|_ϕ|^-1/2/|Σ_ϵ|^m/2exp-1/2(m z^tΣ_ϵ^-1z + M⃗_β^t _ϕ^-1M⃗_β) × exp-1/2[ -2 (_ϕ^-1M⃗_β + G^tΣ_ϵ^-1z )^tΘ⃗_m +Θ⃗_m^t(Δ^-1+_ϕ^-1)Θ⃗_m ] . We conclude: π(Θ_m |, ϕ) ∝exp-1/2[-2 (_ϕ^-1M⃗_β + G^tΣ_ϵ^-1z )^tΘ⃗_m + Θ⃗_m^t(Δ^-1+_ϕ^-1)Θ⃗_m] ⇒π(Θ_m |, ϕ) ∼_pm([Θ_m|,ϕ], Σ_ϕ), where [Θ_m|,ϕ] =_ϕ( _ϕ^-1M_β + G^tΣ_ϵ^-1z) ∈pm, _ϕ =( Δ^-1+^-1_ϕ)^-1∈pm × pm. §.§ Proof of Theorem <ref> We consider any new set of realizations λ^⋆=(λ_1^⋆,⋯, λ_k^⋆)^t and the predictive distribution associated to θ(λ^⋆) is obtained by integrating the Gaussian conditional distribution π(θ(λ^⋆)|Θ_m,ϕ) over the posterior probability measure π(Θ_m|, ϕ)dΘ_m: π_pred(θ(λ^⋆)|, ϕ) = ∫π(θ(λ^⋆)|Θ_m,ϕ) π(Θ_m|, ϕ) dΘ_m, with π(θ(λ^⋆)|Θ_m,ϕ)∼_p k(μ⃗_cond(λ^⋆), Σ_cond(λ^⋆,λ^⋆^')) where μ⃗_cond(λ^⋆)= m⃗_β(λ^⋆) + C(λ^⋆,D_m)_ϕ^-1(Θ⃗_m -M⃗_β) =: μ⃗^⋆ + X^t Θ⃗_m, Σ_cond(λ^⋆,λ^⋆^')=C(λ^⋆,λ^⋆^') - C(λ^⋆,D_m)_ϕ^-1C(D_m,λ^⋆), and X^t:= C(λ^⋆,D_m)_ϕ^-1, μ⃗^⋆ := m⃗_β(λ^⋆) + X^tM⃗_β. Then, we have: π_pred(θ(λ^⋆)|, ϕ) = ∫{√(2π)^-pk/|Σ_cond(λ^⋆,λ^⋆^')|^1/2exp-1/2[ (θ⃗(λ^⋆)-(μ⃗^⋆-X^tΘ⃗_m))^t Σ_cond(λ^⋆,λ^⋆^')^-1 ×(θ⃗(λ^⋆)-(μ⃗^⋆+X^tΘ⃗_m)) ] √(2π)^-pm/|_ϕ|^1/2exp-1/2[(Θ⃗_m- [Θ_m|,ϕ])^t _ϕ^-1(Θ⃗_m -[Θ_m|,ϕ]) ] dΘ_m}. 
By setting θ⃗^⋆:=θ⃗(λ^⋆)-μ⃗^⋆ and by expanding the expression in the integral while grouping the terms in Θ⃗_m, we obtain: π_pred(θ(λ^⋆)|, ϕ) = ∫{√(2π)^-p(k+m)/|Σ_cond(λ^⋆,λ^⋆^')|^1/2|_ϕ|^1/2exp-1/2(θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + [Θ_m|,ϕ]^t_ϕ^-1[Θ_m|,ϕ] ) ×exp-1/2[-2(θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1X^t +[Θ_m|,ϕ]^t_ϕ^-1)Θ⃗_m + Θ⃗_m^t(_ϕ^-1 +XΣ^cond(λ^⋆,λ^⋆^')^-1X^t) ] dΘ_m}. The expression of π_pred(θ(λ^⋆)|, ϕ) becomes: π_pred(θ(λ^⋆)|, ϕ) =√(2π)^-pk/|Σ_cond(λ^⋆,λ^⋆^')|^1/2|_ϕ|^1/2exp-1/2(θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + [Θ_m|,ϕ]^t_ϕ^-1[Θ_m|,ϕ] ) ×{∫1/√(2π)^pmexp-1/2[-2(θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1X^t + [Θ_m|,ϕ]^t_ϕ^-1)Θ⃗_m + Θ⃗_m^t(_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t) ]dΘ_m}. Using the integration formula for multivariate Gaussian densities, the expression {⋯} becomes: |(_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1|^1/2exp1/2[ ( XΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + _ϕ^-1[Θ_m|,ϕ] )^t × (_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1( XΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + _ϕ^-1[Θ_m|,ϕ] ) ]. Therefore, we can write: π_pred(θ(λ^⋆)|, ϕ) =√(2π)^-pk/|Σ_cond(λ^⋆,λ^⋆^')|^1/2|_ϕ|^1/2exp-1/2(θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + [Θ_m|,ϕ]^t_ϕ^-1[Θ_m|,ϕ] ) ×{ |(_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1|^1/2 exp1/2[ ( XΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + _ϕ^-1[Θ_m|,ϕ] )^t × (_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1( XΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + _ϕ^-1[Θ_m|,ϕ] ) ] }. By (<ref>), we have: |_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t| =|_ϕ^-1| |Σ_cond(λ^⋆,λ^⋆^')^-1| |Σ_cond(λ^⋆,λ^⋆^') + X^t_ϕ X |. By replacing it in (<ref>), we obtain: π_pred(θ(λ^⋆)|, ϕ) =√(2π)^-pk/|Σ_cond(λ^⋆,λ^⋆^') + X^t_ϕ X |^1/2exp-1/2(θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + [Θ_m|,ϕ]^t_ϕ^-1[Θ_m|,ϕ] ) exp1/2{( XΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + _ϕ^-1[Θ_m|,ϕ] )^t× (_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1( XΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + _ϕ^-1[Θ_m|,ϕ] ) }. We expand the expression {⋯} in (<ref>): θ⃗^⋆^t [Σ_cond(λ^⋆,λ^⋆^')^-1 X^t (_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1 XΣ_cond(λ^⋆,λ^⋆^')^-1]θ⃗^⋆ + 2 θ⃗^⋆^t [Σ_cond(λ^⋆,λ^⋆^')^-1 X^t (_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1_ϕ^-1][Θ_m|,ϕ] + [Θ_m|,ϕ]^t{_ϕ^-1(_ϕ^-1 +XΣ_cond(λ^⋆,λ^⋆^')^-1X^t)^-1_ϕ^-1}[Θ_m|,ϕ]. By the Woodbury identity (<ref>) (with Z=Σ^cond(λ^⋆,λ^⋆^'), W=_ϕ, U=V=X^t), the expression [⋯] is equal to: Σ_cond(λ^⋆,λ^⋆^')^-1- (Σ_cond(λ^⋆,λ^⋆^')+ X^t_ϕ X )^-1. The expression [⋯] becomes: Σ_cond(λ^⋆,λ^⋆^')^-1 X^t( I_pm + _ϕ X Σ_cond(λ^⋆,λ^⋆^')^-1 X^t)^-1(*)=Σ_cond(λ^⋆,λ^⋆^')^-1 X^t [ I_pm- _ϕ X ×(I_pk +Σ_cond(λ^⋆,λ^⋆^')^-1 X^t _ϕ X )^-1Σ_cond(λ^⋆,λ^⋆^')^-1 X^t]= Σ_cond(λ^⋆,λ^⋆^')^-1 X^t ×[ I_pm-_ϕ X (Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1] = [Σ_cond(λ^⋆,λ^⋆^')^-1 - Σ_cond(λ^⋆,λ^⋆^')^-1X^t _ϕ X(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1] X^t = [Σ_cond(λ^⋆,λ^⋆^')^-1(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 - Σ_cond(λ^⋆,λ^⋆^')^-1X^t _ϕ X(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1] X^t. Therefore, the above equality becomes: [ Σ_cond(λ^⋆,λ^⋆^')^-1(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )- Σ_cond(λ^⋆,λ^⋆^')^-1X^t _ϕ X] × (Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 X^t =[I_pk+Σ_cond(λ^⋆,λ^⋆^')^-1X^t _ϕ X - Σ_cond(λ^⋆,λ^⋆^')^-1X^t _ϕ X ](Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 X^t = (Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 X^t. Note that in step (*)= of (<ref>), we used (<ref>) with Z=I_pm, W=I_pk, U=_ϕ X and V^t=Σ_cond(λ^⋆,λ^⋆^')^-1X^t. By applying the Woodbury identity (<ref>) with Z=_ϕ^-1, W=Σ_cond(λ^⋆,λ^⋆^')^-1, U=V=X, the expression {⋯} of (<ref>) becomes: _ϕ^-1[_ϕ -_ϕ X (Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 X^t _ϕ]_ϕ^-1= _ϕ^-1- X(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 X^t. 
Therefore, the expression {⋯} of (<ref>) is equal to: θ⃗^⋆^t [Σ_cond(λ^⋆,λ^⋆^')^-1- (Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1] + 2θ⃗^⋆^t [(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 X^t ] ×[Θ_m|,ϕ] +[Θ_m|,ϕ]^t{_ϕ^-1 - X(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1 X^t }[Θ_m|,ϕ]. The above expression is equal to: θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ +[Θ_m|,ϕ]^t_ϕ^-1[Θ_m|,ϕ]- (θ⃗^⋆-X^t[Θ_m|,ϕ] )^t(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1(θ⃗^⋆-X^t[Θ_m|,ϕ] ). As a result, the predictive distribution is given by: π_pred(θ(λ^⋆)|, ϕ) =√(2π)^-pk/|Σ_cond(λ^⋆,λ^⋆^') + X^t_ϕ X |^1/2exp-1/2(θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + [Θ_m|,ϕ]^t_ϕ^-1[Θ_m|,ϕ] ) ×exp1/2{θ⃗^⋆^tΣ_cond(λ^⋆,λ^⋆^')^-1θ⃗^⋆ + [Θ_m|,ϕ]^t_ϕ^-1[Θ_m|,ϕ]- (θ⃗^⋆-X^t[Θ_m|,ϕ] )^t ×(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1(θ⃗^⋆-X^t[Θ_m|,ϕ] ) }. Finally, the predictive distribution becomes: π_pred(θ(λ^⋆)|, ϕ) =√(2π)^-pk/|Σ_cond(λ^⋆,λ^⋆^') + X^t_ϕ X |^1/2exp1/2{ - (θ⃗^⋆-X^t[Θ_m|,ϕ] )^t ×(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1(θ⃗^⋆-X^t[Θ_m|,ϕ] ) }, which can be rewritten as: π_pred(θ(λ^⋆)|, ϕ) =√(2π)^-pk/|Σ_cond(λ^⋆,λ^⋆^') + X^t_ϕ X |^1/2exp-1/2{(θ⃗(λ^⋆)-μ⃗^⋆-X^t[Θ_m|,ϕ])^t ×(Σ_cond(λ^⋆,λ^⋆^') + X^t _ϕ X )^-1(θ⃗(λ^⋆)-μ⃗^⋆-X^t[Θ_m|,ϕ] ) }. §.§ Marginal likelihood expression and hyperparameters tuning The optimal hyperparameters ϕ̂ are obtained by maximizing the marginal likelihood: ϕ:=_ϕ=(β_l,σ^2_l,ψ_k)^p_l=1∫(|Θ_m)π(Θ_m|ϕ)dΘ_m. §.§.§ Marginal likelihood expression Starting from Equation (<ref>), the above integral is equal to: ∫(|Θ_m)π(Θ_m|ϕ)dΘ_m =√(2π)^-mn|_ϕ|^-1/2/|Σ_ϵ|^m/2exp-1/2(m z^tΣ_ϵ^-1z + M⃗_β^t _ϕ^-1M⃗_β) × (∫1/√(2π)^pmexp-1/2[ -2 (_ϕ^-1M⃗_β + G^tΣ_ϵ^-1z )^tΘ⃗_m +Θ⃗_m^t_ϕ^-1Θ⃗_m ] dΘ_m), where (⋯)= |_ϕ|^1/2exp-1/2( _ϕ^-1M_β + G^tΣ_ϵ^-1z)^t_ϕ( _ϕ^-1M_β + G^tΣ_ϵ^-1z). Therefore, we write: ∫(|Θ_m)π(Θ_m|ϕ) dΘ_m =√(2π)^-mn|_ϕ|^1/2/|Σ_ϵ|^m/2 |_ϕ|^1/2exp-1/2[m z^tΣ_ϵ^-1z + M⃗_β^t _ϕ^-1M⃗_β -{( _ϕ^-1M_β + G^tΣ_ϵ^-1z)^t_ϕ( _ϕ^-1M_β + G^tΣ_ϵ^-1z)}], where {⋯} = M⃗_β^t _ϕ^-1_ϕ_ϕM⃗_β + 2 M⃗_β^t _ϕ_ϕ G^tΣ_ϵ^-1z + z^tΣ_ϵ^-1 G_ϕ G^tΣ_ϵ^-1z. By using the Sherman-Morrisson formula given by (<ref>) (with A=_ϕ^-1 and B=Δ^-1), we obtain: M⃗_β^t _ϕ^-1_ϕ_ϕ^-1M⃗_β = M⃗_β^t( _ϕ^-1 - (Δ+_ϕ)^-1)M⃗_β, 2 M⃗_β^t _ϕ^-1_ϕ G^tΣ_ϵ^-1z = 2 M⃗_β^t _ϕ^-1_ϕΔ^-1Δ G^tΣ_ϵ^-1z=2 M⃗_β^t _ϕ^-1( Δ^-1+^-1_ϕ)^-1Δ^-1 ×Δ G^tΣ_ϵ^-1z = 2 M⃗_β^t(Δ+_ϕ)^-1Δ G^tΣ_ϵ^-1z, and z^tΣ_ϵ^-1 G_ϕ^-1G^tΣ_ϵ^-1z= z^tΣ_ϵ^-1 G ΔΔ^-1(Δ^-1+_ϕ^-1)^-1Δ^-1Δ G^tΣ_ϵ^-1z(⋆)= z^tΣ_ϵ^-1 GΔ ×(Δ^-1- (Δ+_ϕ)^-1)ΔG^tΣ_ϵ^-1z. Note that at step (⋆)=, we use again the Sherman-Morrisson formula given by (<ref>) (with A=Δ^-1 and B=_ϕ^-1). Therefore, we have: {⋯} = M⃗_β^t_ϕ^-1M⃗_β - M⃗_β^t(Δ+_ϕ)^-1M⃗_β + 2 M⃗_β^t(Δ+_ϕ)^-1Δ G^tΣ_ϵ^-1z + z^tΣ_ϵ^-1 GΔG^tΣ_ϵ^-1z - z^tΣ_ϵ^-1 GΔ(Δ+_ϕ)^-1Δ G^tΣ_ϵ^-1z= M⃗_β^t_ϕ^-1M⃗_β + z^tΣ_ϵ^-1 GΔG^tΣ_ϵ^-1z - ( M⃗_β - Δ G^tΣ_ϵ^-1z)^t(Δ+_ϕ)^-1( M⃗_β - Δ G^tΣ_ϵ^-1z). As a result, the marginal likelihood is equal to: ∫(|Θ_m)π(Θ_m|ϕ) dΘ_m =√(2π)^-mn|_ϕ|^-1/2/|Σ_ϵ|^m/2 |_ϕ|^1/2exp-1/2[ ( M⃗_β - Δ G^tΣ_ϵ^-1z)^t × (Δ+_ϕ)^-1( M⃗_β - Δ G^tΣ_ϵ^-1z) ] =exp-[1/2(M⃗_β -ΔG^tΣ_ϵ^-1z )^t× (Δ+ _ϕ) ^-1(M⃗_β -ΔG^tΣ_ϵ^-1z ) ]× |Δ|^1/2/√(2π)^nm|Σ_ϵ|^m/2|Δ+_ϕ |^1/2, because m z^tΣ_ϵ^-1z= z^tΣ_ϵ^-1 GΔG^tΣ_ϵ^-1z. In fact, we have: Σ_ϵ^-1 GΔG^t =Σ_ϵ^-1∑_j=1^m g_λ_j(x)(g_λ_j(x)^tΣ_ϵ^-1g_λ_j(x))^-1g_λ_j(x)^t =∑_j=1^m Σ_ϵ^-1 g_λ_j(x)(g_λ_j(x)^tΣ_ϵ^-1g_λ_j(x))^-1g_λ_j(x)^t = m I_n. The last equality can be proved by contradiction. Indeed, if Σ_ϵ^-1 g_λ_j(x)(g_λ_j(x)^tΣ_ϵ^-1g_λ_j(x))^-1g_λ_j(x)^t I_n, then, we have: g_λ_j(x)^tΣ_ϵ^-1 g_λ_j(x)(g_λ_j(x)^tΣ_ϵ^-1g_λ_j(x))^-1g_λ_j(x)^t g_λ_j(x)^t, which yields the absurd result : g_λ_j(x)^t g_λ_j(x)^t, ∀ j. 
§.§.§ Hyperparameters tuning Equation (<ref>) gives the optimal hyperparameters: ϕ:=_ϕ=(β_l,σ^2_l,ψ_k)^p_l=1𝐋(ϕ), where 𝐋(ϕ) is the marginal likelihood. Taking the opposite of the marginal log likelihood, we obtain: ϕ̂ = _ϕ (ϕ), where (ϕ)= (M⃗_β -ΔG^tΣ_ϵ^-1z )^t(Δ+ _ϕ) ^-1(M⃗_β -ΔG^tΣ_ϵ^-1z ) -log|Δ| + nm log(2π) + m log|Σ_ϵ| + log|Δ+_ϕ|. Suppose that M⃗_β∈pm is of the following form: M⃗_β = 𝐇β, where β= (β_1,⋯, β_p)^t ∈q with q ≥ p et 𝐇∈pm × q. Then, the optimum in β is given by the following proposition: The optimum in β is equal to: β̂(σ^2,ψ) = ( 𝐇^t(Δ+ _ϕ) ^-1𝐇)^-1𝐇^t (Δ+ _ϕ) ^-1ΔG^tΣ_ϵ^-1z, where σ^2=(σ^2_1,⋯, σ^2_p)^t∈+^p and ψ=(ψ_1,⋯,ψ_p)^t∈q^' with q^'≥ p. Therefore, the optimization problem given by Equation (<ref>) becomes: σ̂^2, ψ̂ = _σ^2,ψ ( β̂(σ^2,ψ),σ^2,ψ) . This optimization problem will be solved by using numerical optimization algorithms. §.§.§ Proof of proposition <ref> Let us derive with respect to β using chain rule derivative: ∂ℓ(β, σ^2,ψ)/∂β = ∂ℓ(β, σ^2,ψ)/∂ (𝐇β- ΔG^tΣ_ϵ^-1z) ×∂ (𝐇β- ΔG^tΣ_ϵ^-1z)/∂β =∂ℓ(β, σ^2,ψ)/∂ (𝐇β-ΔG^tΣ_ϵ^-1z ) ×𝐇 = 2(𝐇β-ΔG^tΣ_ϵ^-1z )^t (Δ+ _ϕ) ^-1𝐇. Then, we have: ∂ℓ(β, σ^2,ψ)/∂β =0 (𝐇β-ΔG^tΣ_ϵ^-1z )^t (Δ+ _ϕ) ^-1𝐇 =0 𝐇^t(Δ+ _ϕ) ^-1𝐇β = 𝐇^t(Δ+ _ϕ) ^-1ΔG^tΣ_ϵ^-1z. abbrvnat
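For completeness, the profiling of β and the numerical minimization of ℓ can be written down directly from the expressions above. The sketch below is one possible dense implementation, in which the trend design matrix H, the linearization matrices g_λ_j(x), the noise covariance Σ_ϵ and the data z are assumed to be available as arrays; the variance and correlation-length parameters are optimized on the log scale, and no attention is paid to numerical robustness.

import numpy as np
from scipy.optimize import minimize

def matern52(a, b, psi):
    d = np.abs(a[:, None] - b[None, :]) / psi
    return (1.0 + np.sqrt(5.0) * d + 5.0 / 3.0 * d**2) * np.exp(-np.sqrt(5.0) * d)

def prior_cov(lambdas, sigma2, psi):
    p, m = len(sigma2), len(lambdas)
    C = np.zeros((p * m, p * m))
    for l in range(p):
        C[l::p, l::p] = sigma2[l] * matern52(lambdas, lambdas, psi[l])
    return C

def neg_log_marglik(params, z, G_list, Sigma_eps, lambdas, H, p):
    # l(beta_hat(sigma2, psi), sigma2, psi): beta is profiled out using the Proposition above.
    sigma2, psi = np.exp(params[:p]), np.exp(params[p:])
    n, m = z.size, len(G_list)
    Se_inv = np.linalg.inv(Sigma_eps)
    Delta = np.zeros((p * m, p * m))
    b = np.zeros(p * m)                                   # b = Delta G^t Sigma_eps^{-1} z
    for j, g in enumerate(G_list):
        sl = slice(j * p, (j + 1) * p)
        Delta_j = np.linalg.inv(g.T @ Se_inv @ g)
        Delta[sl, sl] = Delta_j
        b[sl] = Delta_j @ (g.T @ Se_inv @ z)
    A = Delta + prior_cov(lambdas, sigma2, psi)           # Delta + Sigma_phi
    A_inv = np.linalg.inv(A)
    beta_hat = np.linalg.solve(H.T @ A_inv @ H, H.T @ A_inv @ b)
    resid = H @ beta_hat - b                              # M_beta - Delta G^t Sigma_eps^{-1} z
    _, logdet_A = np.linalg.slogdet(A)
    _, logdet_Delta = np.linalg.slogdet(Delta)
    _, logdet_Se = np.linalg.slogdet(Sigma_eps)
    return (resid @ A_inv @ resid - logdet_Delta
            + n * m * np.log(2.0 * np.pi) + m * logdet_Se + logdet_A)

# Typical call (all arrays user-supplied):
# res = minimize(neg_log_marglik, x0, args=(z, G_list, Sigma_eps, lambdas, H, p), method="Nelder-Mead")

Any gradient-free or quasi-Newton optimizer can then be applied to this criterion to obtain ϕ̂.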
http://arxiv.org/abs/2307.01605v1
20230704094417
Cosmology with fast radio bursts in the era of SKA
[ "Ji-Guo Zhang", "Ze-Wei Zhao", "Yichao Li", "Jing-Fei Zhang", "Di Li", "Xin Zhang" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-ph" ]
Department of Physics, College of Sciences, Northeastern University, Shenyang 110819, China Department of Physics, College of Sciences, Northeastern University, Shenyang 110819, China Department of Physics, College of Sciences, Northeastern University, Shenyang 110819, China Key Laboratory of Cosmology and Astrophysics (Liaoning), Northeastern University, Shenyang 110819, China Department of Physics, College of Sciences, Northeastern University, Shenyang 110819, China Key Laboratory of Cosmology and Astrophysics (Liaoning), Northeastern University, Shenyang 110819, China National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China University of Chinese Academy of Sciences, Beijing 100049, China Research Center for Intelligent Computing Platforms, Zhejiang Laboratory, Hangzhou 311100, China Corresponding author. [email protected] Department of Physics, College of Sciences, Northeastern University, Shenyang 110819, China Key Laboratory of Cosmology and Astrophysics (Liaoning), Northeastern University, Shenyang 110819, China National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Northeastern University, Shenyang 110819, China Key Laboratory of Data Analytics and Optimization for Smart Industry (Ministry of Education), Northeastern University, Shenyang 110819, China We present a forecast of the cosmological parameter estimation using fast radio bursts (FRBs) from the upcoming Square Kilometre Array (SKA), focusing on the issues of dark energy, the Hubble constant, and baryon density. We simulate 10^5 and 10^6 localized FRBs from a 10-year SKA observation, and find that: (i) using 10^6 FRB data alone can tightly constrain dark-energy equation of state parameters better than CMB+BAO+SN, providing a single cosmological probe to explore dark energy; (ii) combining the FRB data with gravitational wave standard siren data from 10-year observation with the Einstein Telescope, the Hubble constant can be constrained to a sub-percent level, serving as a powerful low-redshift probe; (iii) using 10^6 FRB data can constrain the baryon density Ω_ bh to a precision of ∼ 0.1%. Our results indicate that SKA-era FRBs will provide precise cosmological measurements to shed light on both dark energy and the missing baryon problem, and help resolve the Hubble tension. Cosmology with fast radio bursts in the era of SKA Xin Zhang ================================================== § INTRODUCTION Fast radio bursts (FRBs) are millisecond-duration bright radio transients with dispersion measures (DMs) beyond Galactic expectations <cit.>, which remain enigmatic due to their unknown origins and underlying emission mechanisms. Nonetheless, with the high all-sky FRB event rate and advancements of radio astronomy instrument, many fascinating FRB discoveries have been achieved, such as repeaters <cit.>, some of which have periodic activity <cit.>, and in particular, the landmark detection of FRB 200428 associated with a Galactic magnetar SGR 1935+2154 <cit.>, indicating that at least some FRBs can originate from magnetars. Thus far, more than 700 FRBs have been observed (including 29 repeaters) <cit.> largely thanks to the splendid release of the first catalogue <cit.> from the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst project (CHIME/FRB) <cit.>. More than a dozen of FRBs have been located within individual galaxies <cit.>. These facts show a growing trend that FRBs can serve as a powerful cosmological probe. 
By employing FRBs, one can establish a relation between dispersion measure and redshift, which is like the distance–redshift relation, because in this relation DM serves as a proxy of distance <cit.>. The observed DM encodes the content of free electrons along the line of sight. The dominant part in DM comes from the contribution of the intergalactic medium (IGM), denoted as DM_IGM, and this quantity is related to cosmology <cit.>. If FRBs can be localized, i.e., their host galaxies can be found and the redshifts of them can be measured, then the localized FRBs can be used as a cosmological probe through the Macquart relation (i.e., the DM_IGM–z relation) <cit.>. Using localized FRBs, substantial cosmological studies have been done, including probing the baryon fraction of the IGM <cit.>, strong gravitational lensing <cit.>, testing the equivalence principle <cit.>, probing the reionization history <cit.> and other cosmological applications <cit.>. For recent reviews, see Refs. <cit.>. The Square Kilometre Array (SKA), as the upcoming largest high-sensitivity radio telescope <cit.>, promises detection of FRBs several orders of magnitude larger than the current sample size in the next decades <cit.>. Obviously, it is expected that some important issues in cosmology would be solved with the help of the plentiful FRBs in the era of SKA. Dark energy is one of the most difficult theoretical issues in fundamental physics and cosmology. To elucidate the nature of dark energy, the most crucial step is to precisely measure the equation of state (EoS) of dark energy, which can be achieved by precisely measuring the expansion history of the universe. The Λ cold dark matter (ΛCDM) model has long been viewed as the standard model of cosmology, in which the cosmological constant Λ with the EoS w=-1 serves as dark energy <cit.>. The precise measurements on anisotropies in the cosmic microwave background (CMB) by the Planck satellite strongly favor the six-parameter ΛCDM cosmology <cit.>. However, the CMB is an early-universe probe, and so when a dynamical dark energy (with EoS unequal to -1 and usually evolving) is considered, the extra parameters describing EoS cannot be effectively constrained by CMB <cit.>. Therefore, developing independent late-universe cosmological probes becomes rather important for exploring the nature of dark energy. Usually, the late-universe observations such as baryon acoustic oscillations (BAO) and type Ia supernovae (SN) themselves cannot solely tightly constrain dark energy, but they serve as complementary tools to help break the cosmological parameter degeneracies inherent in the CMB <cit.>. Nevertheless, the cosmological tensions between the early and late universe appeared in recent years, such as the so-called “Hubble tension” <cit.>. In addition to searching for new physics <cit.>, forging late-universe probes that can precisely measure cosmological parameters (in particular related to dark energy) is extremely important for current cosmology <cit.>. In addition to the ongoing and upcoming large optical surveys, other-type late-universe observations may prove to be promising to explore dark energy. For example, the recent intensive discussions on gravitational-wave (GW) standard sirens <cit.> show that GWs as a new messenger may offer a novel useful tool for probing the expansion history of the universe <cit.> (see Ref. <cit.> for a recent review). 
In addition, taking neutral hydrogen as a new tracer of the total matter in the universe through observing their 21-cm emissions (in particular, by the intensity mapping method <cit.>) to map the three-dimensional distribution of the matter is hopeful to play a crucial role in studying cosmology. In addition to these, it has been shown in Refs. <cit.> that a sufficient large sample of FRBs (localized ones with the redshifts being measured) could be used as a useful tool to constrain dark-energy parameters. Particularly, Ref. <cit.> pointed out that, in order to become a useful cosmological tool, at least ∼ 10^4 localized FRBs are needed, which can be used to help break cosmological parameter degeneracies led by CMB. Moreover, Ref. <cit.> concluded that with ∼ 10^4 localized FRBs the results of CMB+FRB are better than those of CMB+BAO; GW+FRB can provide an effective late-universe probe that is comparable with CMB+BAO; CMB+GW+FRB can provide tight constraints on cosmological parameters. The sample size of ∼ 10^4 localized FRBs is consistent with the detection event rate given by different surveys, such as Australian Square Kilometre Array Pathfinder (ASKAP) <cit.> and CHIME outrigger <cit.>. However, the in-construction SKA will definitely have much more powerful capability of detecting and localizing FRBs, which will undoubtedly provide a much larger FRB sample to be used in exploring the nature of dark energy. Therefore, the questions of what extent the nature of dark energy can be explored using the FRB observation in the era of SKA and whether the SKA-era FRB observation can be forged into a precise late-universe probe cry out for answers. Our present work wishes to investigate these issues in depth and in detail and to try to give answers to these questions. An issue closely related to dark energy is the measurement of the Hubble constant using FRBs, which has been extensively investigated in recent literature <cit.>. The Planck CMB data can tightly constrain the Hubble constant H_0 in the ΛCDM cosmology (at a ∼ 0.8% precision), but when a dynamical dark energy is considered in a cosmological model, the CMB data cannot provide a tight constraint on H_0 since the EoS of dark energy is in significant anti-correlation with the Hubble constant. This point has often been employed in the discussions of the possible solution to the “Hubble tension”. In Ref. <cit.>, it was shown that FRBs cannot provide an effective constraint on H_0 either, but fortunately the orientations of degeneracies related to H_0 in the cases of FRBs and CMB are different, leading to the result of the degeneracy inherent in CMB being broken by FRBs. Hence, the combination of FRBs and CMB can give a rather tight constraint on the Hubble constant, which is useful in the case of not considering the Hubble tension. Alternatively, the combination of GW standard sirens and FRBs can be considered to investigate the estimation solely from the late universe, potentially providing a cross-check to the Hubble tension. In addition, an important topic in the area of FRB cosmology is the “missing baryon” problem. Relative to the global density inferred from the big bang nucleosynthesis (BBN) and CMB measurements <cit.>, the detection of baryons in local universe shows a deficit of about 30% <cit.>. 
To search for the missing baryons and account for the discrepancy between observation and theoretical prediction, one has recognized that the great majority of baryons reside in the IGM; especially the warm and hot ionized medium (WHIM) rather than galaxies <cit.>, making the detection quite challenging because of the diffuse resident. <cit.> derived a complete estimate with only 5 localized ASKAP FRBs, which is consistent with the joint result of BBN and CMB. Nevertheless, the uncertainty in the result of Ref. <cit.> is fairly large (at a ∼ 40% precision), and thus it is necessary to investigate what extent the missing baryon problem can be solved to by using the SKA-era FRB observation. In this paper, we simulate the SKA-era FRB observation and investigate the important cosmological issues using the mock FRB data. Our focuses of scientific problems are on the issues of dark energy, the Hubble constant and baryons. § METHODS AND DATA §.§ Uncertainties in dispersion measures of FRBs When a FRB's emission travels through the plasma, the radio pulse will be dispersed, i.e. photons at higher frequency arrive earlier. The time delay (Δ t) of frequencies ν_1, ν_2 (ν_1<ν_2) can be quantified by DM <cit.>, Δ t ≃ 4.15  s[(ν_1/GHz)^-2-(ν_2/GHz)^-2] DM/10^3 pc cm^-3. The observed DM of a FRB at redshift z, representing the electron density integrated along the line-of-sight (DM=∫n_e dl/(1+z)), can be separated into the following components <cit.>, DM_obs=DM_MW+DM_IGM+DM_host+DM_src/1+z. where DM_MW is the Galactic contribution from the interstellar medium (ISM) and halo of the Milky Way, DM_IGM is the dominant component contributed by the IGM, and the last term represents the contribution from the host galaxy and the source local environment. The component DM_MW can be divided into the ISM-contributed DM_MW,ISM and the halo-contributed DM_MW,halo. Given the Galactic coordinate of a FRB, DM_MW,ISM can be obtained from typical electron density models such as NE2001 <cit.> or YMW16 <cit.>, while the latter can be estimated as DM_MW, halo=50 ∼ 80  pc cm^-3 <cit.>. The extragalactic DM of a FRB can be defined as DM_E≡DM_obs-DM_MW=DM_IGM+DM_host+DM_src/1+z. In particular, DM_IGM is closely related to cosmology, and Macquart relation gives its averaged value, i.e., ⟨DM_IGM⟩=3 c Ω_ b H_0/8 π G m_p∫_0^zf_e(z')f_IGM(z')(1+z')/E(z') d z', where G is the Newton's constant, m_p is the proton mass, f_IGM≃0.83 is the fraction of baryon mass in the IGM <cit.> and E(z) is the dimensionless Hubble parameter related to cosmological models (see Eq. (<ref>) for details). The electron fraction is f_e(z)=Y_ Hχ_e,H(z)+1/2Y_ Heχ_e,He(z), where Y_ H=3/4 and Y_ He=1/4 are the mass fractions of hydrogen and helium, respectively, and χ_e,H and χ_e,He are the ionization fractions of hydrogen and helium, respectively. f_e(z) varies with both χ_e,H and χ_e,He during the hydrogen and helium reionization. We set f_e= 0.875 since both hydrogen and helium are fully ionized at z < 3 <cit.>. From Eq. (<ref>), DM_E is available for a localized FRB with DM_obs and DM_MW determined. If DM_host is treated properly, DM_IGM can be measured and the corresponding uncertainty can be expressed as σ_DM_IGM=[σ_ obs^2+σ_ MW^2+σ_ IGM^2 +(σ_ host,src/1+z)^2]^1/2. The observational uncertainty σ_obs=0.5 pc cm^-3 is derived from the current catalogs of known FRBs <cit.>. With the ATNF pulsar catalog <cit.>, the uncertainty of DM_MW averages about 10 pc cm^-3 for the pulses from high Galactic latitude (|b|>10^∘). 
The uncertainty σ_IGM about the mean DM_IGM of the whole population, i.e., the sightline-to-sightline variance caused by the cosmic baryonic inhomogeneity in the IGM, makes the largest contribution to σ_DM_IGM <cit.>. McQuinn <cit.> proposed three baryonic profile models to estimate this deviation. Here we consider the simplest one, the top-hat model, in which σ_ IGM scales with redshift as σ_IGM(z)=173.8 z^0.4 pc cm^-3, a power-law fit given in Ref. <cit.>. For an individual FRB, σ_host,src depends on distinctive properties such as the type of the host galaxy, the location of the FRB within the host, and the near-source plasma, which makes it difficult to estimate. Here we adopt σ_host, src = 30  pc cm^-3 as the uncertainty of both DM_host and DM_src. §.§ Event rate of FRBs Owing to the high all-sky FRB rate of ∼ 10^5 per day <cit.>, a large number of FRBs are expected to be accumulated by the upcoming surveys. In order to predict the detectable sample size of localized FRB data in the SKA era, current samples have been used for statistical studies, e.g., on the event rate distribution <cit.>, the energy function <cit.> and the volumetric event rate of the FRB population <cit.>. To count the cumulative event number of FRBs above a specific fluence threshold F_ν, a power-law model N(>F_ν) ∝ F_ν^α <cit.> is often used, where the power-law index α = -1.5 is consistent with a non-evolving population in Euclidean space <cit.>. This approach can be used to estimate the all-sky event rate N_ sky above a given fluence threshold F_ν, which reads N_ sky(>F_ν)=N^'_ sky(F_ν/F^'_ν)^α [sky^-1day^-1], where N^'_ sky is the all-sky event rate at fluence threshold F^'_ν. Note that in this simple power-law model the inferred N_ sky varies strongly with α, which has been estimated in various surveys <cit.>. According to the suggestion of <cit.>, we choose SKA1-MID as the optimal array for FRB observation, which reaches a 10σ fluence threshold of F_ν = 14 mJy ms <cit.>. Here, we consider the rate measurements of the ASKAP and Parkes telescopes <cit.>, which dominate at 1.4 GHz within the 0.9–1.67 GHz observing band of SKA1-MID. <cit.> obtain N^'_ sky = 1.7_-0.9^+1.5× 10^3 sky ^-1 day ^-1 above 2 Jy ms from Parkes surveys. <cit.> obtain N^'_ sky = 37 ± 8 sky ^-1 day ^-1 above 26 Jy ms based on ASKAP FRB surveys. We simply adopt the Euclidean-expected value of α=-1.5, although, by analyzing the Parkes and ASKAP datasets, some studies <cit.> report a disagreement with the Euclidean expectation. According to Eq. (<ref>), we can calculate the all-sky rate observed by SKA1-MID based on the results of Parkes and ASKAP. Based on the Parkes result, we have N_sky =(1.7_-0.9^+1.5× 10^3)(0.014 Jy ms/2 Jy ms)^-1.5sky^-1day^-1 = 2.9_-1.5^+2.6× 10^6 sky^-1day^-1, and based on the ASKAP result, we have N_sky =(37 ± 8)(0.014 Jy ms/26 Jy ms)^-1.5sky^-1day^-1 = 3.0_-0.6^+0.7× 10^6 sky^-1day^-1. The two results are clearly consistent with each other. Then we can derive the apparent detection rate N_sur for a given telescope, N_sur=N_sky[sky^-1day^-1]Ω[sky FoV^-1]t_obs [day yr^-1], where Ω is the sky coverage fraction[Note that Ω is calculated as Ω = Ω_sur/Ω_tot in units of sky FoV^-1 (or FoV^-1/sky^-1), where Ω_sur and Ω_tot stand for the solid angle of the FoV and the whole sky, respectively.] of the SKA instantaneous observation, and t_obs is the exposure time on source. 
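The fluence scaling above and the resulting SKA1-MID event-rate estimate can be reproduced with a few lines of Python. The central values are those quoted in the text, while the field-of-view and observing-time assumptions anticipate the SKA1-MID values adopted in the following paragraph (20 deg^2 instantaneous FoV and 20% of observing time per year).

def scale_rate(n_ref, f_ref, f_new, alpha=-1.5):
    """All-sky rate above a new fluence threshold, assuming N(>F) ~ F^alpha."""
    return n_ref * (f_new / f_ref) ** alpha

# Central values quoted in the text (fluences in Jy ms, rates in sky^-1 day^-1)
N_SKY_PARKES = scale_rate(1.7e3, 2.0, 0.014)    # ~2.9e6, Parkes-based
N_SKY_ASKAP = scale_rate(37.0, 26.0, 0.014)     # ~3.0e6, ASKAP-based

OMEGA = 20.0 / 41253.0          # 20 deg^2 instantaneous FoV over the whole sky (~41253 deg^2)
T_OBS = 0.2 * 365.0             # 20% of the observing time per year, in days

for label, n_sky in (("Parkes-based", N_SKY_PARKES), ("ASKAP-based", N_SKY_ASKAP)):
    print(f"{label}: N_sky ~ {n_sky:.2e} sky^-1 day^-1, "
          f"N_sur ~ {n_sky * OMEGA * T_OBS:.2e} FoV^-1 yr^-1")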
The mid-frequency array of the first phase of SKA (SKA1-MID) can be used to commensally search for FRBs along with pulsar searches. Equipped with phased array feeds (PAFs) for each dish to realize wide field of view (FoV), SKA1-MID has a ∼ 20 deg^2 effective instantaneous FoV <cit.>, which is used to calculate Ω.[In general, the survey area of the SKA1-MID in Medium-Deep Band 2 can reach ∼ 5,000 deg^2 during the overall observation. Since FRB serves as transient phenomenon, here we take Ω_sur = 20 deg^2 to calculate the event rate of FRBs in the instantaneous FoV of the SKA1-MID equipped with PAFs.] For t_obs, according to Ref. <cit.>, which introduced the strategy of a FRB search on ASKAP viewed as a precursor to the SKA, we take an average of 20% of observing time per year spent observing FRBs, and take t_obs = 20%× 365 day yr^-1. Thus, we can estimate the expected event number of SKA1-MID per year, i.e., N_ sur≃ 1.0_-0.5^+0.9× 10^5 FoV^-1 yr^-1 and 1.0_-0.2^+0.3× 10^5 FoV^-1 yr^-1, which are the predictions based on the Parkes and ASKAP results, respectively. These results reach a consensus with previous assessments <cit.>. We conclude that O(10^5)–O(10^6) FRBs can be detected by SKA1-MID in a 10-year observation. The event localization of FRBs is paramount for FRB cosmology. To identify the host galaxies and hence redshifts, it is necessary to localize FRBs within ∼ 0.1”-0.5” <cit.>. Within an accuracy of a few sub-arcseconds, ASKAP has localized at least 16 FRBs in dozens of its detected samples <cit.>. The SKA is capable of localizing FRBs to the sub-arcsecond level, and thus it is reasonable to assume that all the FRBs detected by the SKA can be simultaneously localized (and their redshifts can also be measured). §.§ Uncertainties in luminosity distances of GW standard sirens We simulate the GW standard sirens based on the Einstein Telescope (ET) <cit.>, considering only the binary neutron star (BNS) merger events. The redshift distribution of BNSs is expressed as <cit.> P(z)∝4π d_ C^2(z)R(z)/H(z)(1+z), where d_ C is the comoving distance and R(z) is the time evolution of the burst rate with the form <cit.> R(z) = 1+2z, z≤ 1, 3/4(5-z), 1 < z < 5, 0, z ≥ 5. The amplitude of the GW signal in Fourier space 𝒜 and the luminosity distance d_L are approximately related by 𝒜∝ 1/d_ L. The luminosity distance in a flat universe is expressed as d_L(z)=(1+z) d_C(z)=(1+z) ∫_0^zd z^'/H(z^'), where H(z) is the Hubble parameter. The total errors in the measurement of d_ L include the instrumental, weak lensing, and peculiar velocity errors, i.e., σ_d_ L=√((σ_d_ L^ inst)^2 + (σ_d_ L^ lens)^2 + (σ_d_ L^ pv)^2). The instrumental error is simulated using the method described in Ref. <cit.>, the weak lensing and peculiar velocity errors are obtained from Refs. <cit.> and <cit.>, respectively. Assuming a 10-year observation span, the ET is expected to detect 1000 GW events from BNS mergers in the redshift range of z≲5 <cit.>. In this work, we simulate this catalog. Other studies on the GW standard sirens from the coalescences of (super)massive black hole binaries based on various space-based GW observatories and pulsar timing arrays can be found in Refs. <cit.>. §.§ Fiducial cosmological models The dark-energy EoS parameter is defined as w(z)=p_ de(z)/ρ_ de(z), with p_ de(z) and ρ_ de(z) pressure and density of dark energy, respectively. 
In a flat universe, the dimensionless Hubble parameter is given by the Friedmann equation, E^2(z)=H^2(z)/H_0^2= (1 - Ω _ m)exp[3∫_0^z 1 + w(z')/1 + z' d z'] +Ω _ m(1 + z)^3, where Ω_ m is the present-day matter density parameter. In this work, we consider three fiducial cosmological models to generate simulated data of FRBs (the central values of these data). We only consider the simplest and most general cases in the studies of dark energy, namely, the ΛCDM model [w (z) = -1] in which the cosmological constant Λ (equivalent to the vacuum energy density) serves as dark energy, the wCDM model [w(z) = constant] in which dark energy has a constant EoS, and the w_0w_aCDM model [w(z)=w_0+w_az/(1+z)] <cit.> in which dark energy has an evolving EoS characterized by two parameters w_0 and w_a. We use the simulated FRB data and other observational data to constrain the three dark energy models. §.§ Other observational data In this work, we also use other observational data as complementary. In addition to the CMB data, we also use the mainstream late-universe observations, BAO and SN, to combine with CMB. For the CMB observation, we employ the “Planck distance priors" derived from Planck 2018 observation <cit.>. We did not use the full power spectra data of CMB, because the actual observational data are only used as complementary in this work and utilization of CMB distance prior data is convenient and resource saving. For the BAO data, we use five data points from three observations, including the measurements from 6dF Galaxy Survey (6dFGS) at z_ eff = 0.106 <cit.>, Main Galaxy Sample of Data Release 7 of Sloan Digital Sky Survey (SDSS-MGS) at z_ eff = 0.15 <cit.>, and Data Release 12 of Baryon Oscillation Spectroscopic Survey (BOSS-DR12) at z_ eff = 0.38, 0.51 and 0.61 <cit.>. For the SN data, we use the latest Pantheon data set compiled by <cit.>. We use the data combination CMB+BAO+SN (abbreviated as “CBS") to constrain the fiducial cosmological models, and the obtained best-fit cosmological parameters that are used to generate the central values of the simulated FRB data. Moreover, the local-universe measurement result of H_0= 73.04 ± 1.04  km s^-1Mpc^-1 from cosmic distance ladder given by the SH0ES team <cit.> is employed as a Gaussian prior when we discuss the missing baryon problem. §.§ Method of cosmological parameter estimation Currently, the real redshift distribution of FRBs is still unknown. Several studies investigated this distribution using current different FRB databases <cit.>, but no consistent result was obtained due to the limited number of detected FRBs. <cit.> highlighted the significant effect of the assumed redshift distributions on cosmological parameter estimation, leading to different scenarios for the prospects of FRB cosmology. In this work, we assume a constant comoving number density distribution with a Gaussian cut-off z_ cut=1, which makes FRBs have a moderate constraining capability as suggested in Ref. <cit.>. The constant-density distribution function N_const(z) is expressed as N_const(z)=𝒩_constd_C^2(z)/H(z)(1+z) e^-d_L^2(z) /[2 d_L^2(z_cut)], where 𝒩_const is a normalization factor. We generate the mock data of FRBs in the redshift range of 0<z<2.5 tracking the aforementioned distribution. The central values of DMs in the simulated data of FRBs are given by the fiducial models, i.e. the ΛCDM, wCDM and w_0w_aCDM models, that are constrained by the CBS data. The uncertainties in DMs are considered according to the prescription given in Sec. <ref>. 
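For reference, the dimensionless Hubble parameter of the three fiducial models and the constant-comoving-density redshift distribution N_const(z) can be implemented compactly. The sketch below uses the closed-form integral of the CPL parametrization and a simple rejection sampler; the default parameter values are illustrative rather than the fiducial best-fit values used to generate the mock data.

import numpy as np

def E(z, Om=0.315, w0=-1.0, wa=0.0):
    """E(z) for the w0waCDM parametrization (LambdaCDM: w0=-1, wa=0; wCDM: wa=0),
    using the closed-form CPL dark-energy density evolution."""
    de = (1 - Om) * (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(Om * (1 + z) ** 3 + de)

def comoving_distance(z, n=2048, **cosmo):
    """Dimensionless comoving distance H0*d_C/c via the trapezoidal rule."""
    zz = np.linspace(0.0, z, n)
    return np.trapz(1.0 / E(zz, **cosmo), zz)

def sample_frb_redshifts(n_frb, z_max=2.5, z_cut=1.0, rng=None, **cosmo):
    """Draw mock FRB redshifts from N_const(z) ~ d_C^2 / [E(z)(1+z)] * exp(-d_L^2 / (2 d_L^2(z_cut)))
    by rejection sampling on a tabulated, normalized envelope."""
    rng = np.random.default_rng(rng)
    zz = np.linspace(1e-3, z_max, 500)
    dC = np.array([comoving_distance(z, **cosmo) for z in zz])
    dL = (1 + zz) * dC
    dL_cut = (1 + z_cut) * comoving_distance(z_cut, **cosmo)
    pdf = dC ** 2 / (E(zz, **cosmo) * (1 + zz)) * np.exp(-dL ** 2 / (2 * dL_cut ** 2))
    pdf /= pdf.max()
    samples = []
    while len(samples) < n_frb:
        z_try = rng.uniform(1e-3, z_max, size=n_frb)
        keep = rng.uniform(size=n_frb) < np.interp(z_try, zz, pdf)
        samples.extend(z_try[keep].tolist())
    return np.array(samples[:n_frb])

zs = sample_frb_redshifts(1000, rng=42)
print("mean mock FRB redshift:", zs.mean())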
We carry out the Markov Chain Monte Carlo (MCMC) analysis <cit.> to derive the posterior probability distributions of cosmological parameters. So we use the χ^2 statistic as a measure of likelihood for the parameter values. The χ^2 function of FRB is defined as χ_FRB^2(θ)=∑_i=1^N_ FRB(DM_IGM^th(z_i; θ)-DM_IGM^obs(z_i)/σ_DM_IGM(z_i))^2, where θ represents the set of cosmological parameters, DM_IGM^th(z_i; θ) is the theoretical value of DM_IGM at the redshift z_i calculated by Eq. (<ref>), DM_IGM^obs(z_i) is the observable value, and σ_DM_IGM(z_i) represents the uncertainty of DM_IGM (see Eq. (<ref>)). The χ^2 function of GW is defined as χ_GW^2(θ)=∑_i=1^N_ GW(d_L^th(z_i; θ)-d_L^obs(z_i)/σ_d_ L(z_i))^2, where d_L^th(z_i; θ) is the theoretical value of d_L at the redshift z_i calculated by Eq. (<ref>), d_L^obs(z_i) is the observable value, and σ_d_ L(z_i) represents the uncertainty of d_L^th (see Eq. (<ref>)) In our work, the Python package emcee <cit.> is employed for the MCMC analysis, and GetDist <cit.> is used for plotting the posterior distributions of the cosmological parameters. We consider a 10-year observation of the SKA for the detections of FRBs. According to our estimation of the FRB detection event rate by the SKA in Sec. <ref>, the 10-year observation of the SKA would detect O(10^5-10^6) FRBs. Thus, in this work, we consider two cases: the normal expected scenario with N_FRB = 10^5 (denoted as FRB1) and the optimistic expected scenario N_FRB = 10^6 (denoted as FRB2). The simulated FRB1 data as an example is shown in Fig. <ref>. § RESULTS AND DISCUSSION We use the simulated FRB observation of the SKA to constrain the cosmological models. In addition, to make comparison and combination, we also use the actual CMB and CBS data. For the priors on Ω_ m, H_0 and Ω_ bh^2, we use uniform distributions within ranges of [0, 0.7], [50, 90] km s^-1 Mpc^-1 and [0, 0.05], respectively. We also adopt priors on w, w_0 and w_a within ranges of [-2, 1], [-2, 2] and [-3, 3] for dynamical dark energy models. The constraint results are summarized in Table <ref>. Here, we use σ(ξ) and ε(ξ)=σ(ξ)/ξ to represent the absolute and relative errors of the parameter ξ, respectively. In the following, we report the constraints on dark-energy parameter, the Hubble constant and baryon density in order. §.§ Dark energy First, we discuss the constraints on the EoS of dark energy by using the FRB data. In Figs. <ref> and <ref>, we show our constraint results on dark-energy EoS parameters for the wCDM and w_0w_aCDM models, respectively. For the wCDM model, the CMB data alone cannot provide a precise constraint on the EoS of dark energy. For example, the CMB data can only give a 15.2% constraint on w. By combining the CMB data with the current late-universe observations, BAO+SN, we can see that the CBS data can lead to a 3.5% constraint. However, we find that the FRB data alone can provide fairly tight constraints on w. Concretely, FRB1 and FRB2 give constraints of the 5.0% and 1.6% precision on w, respectively. Therefore, using about 10^5 FRB data can provide a slightly weaker constraint than the CBS data, and using about 10^6 FRB data can give a much better measurement on w than the CBS data. On combining the CMB data with the FRB1 data, similar to the findings in Ref. <cit.>, we find that this combination can effectively improve the constraints on dark-energy EoS parameters, compared to those solely using either the CMB or FRB data alone. 
For example, the combination of CMB+FRB1 provides a constraint of σ(w)=0.022, which is a significant improvement of about 85% and 57% compared to using the CMB and FRB1 data alone, respectively. The lower-left panel of Fig. <ref> shows the constraint contours of CMB, FRB1, CMB+FRB1 in the Ω_ m–w plane. The orientations of the contours constrained by CMB and by FRB1 are rather different, indicating that FRB1 can well break the parameter degeneracy induced by CMB. In order to study the extent of this capability, we add the dashed contour of CBS in the figure for comparison with the contour of CMB+FRB1. It is evident that the CMB+FRB1 data can give better constraints than the CBS data, with a quantitative increase in precision of about 37%. This suggests that using about 10^5 FRB data can effectively break the parameter degeneracies induced by the CMB data, even better than using the data BAO+SN. When the number of FRB data is increased from 10^5 (FRB1) to 10^6 (FRB2), the FRB2 data alone can give a very tight constraint of σ(w)=0.016, while the CMB+FRB2 data can only give a slightly improved constraint of σ(w)=0.013 when compared to FRB2 alone. The upper-right panel of Fig. <ref> shows the two-dimensional posterior contours by using CMB, FRB2, CMB+FRB2 and also CBS in the Ω_ m–w plane. The contour from FRB2 is significantly smaller than that of CMB, indicating that the constraint capability from CMB+FRB2 mainly arises from FRB2. By comparing with CBS, we can also see that FRB2 not only gives much better constraints than CMB but even better than CBS. This result once again highlights that, even without the help of CMB, using about 10^6 FRB data can provide a remarkable constraint on w independently, even tighter than the CBS data. For the w_0w_aCDM model, the CMB data can only give a result of σ(w_0)=0.440, which can be greatly improved to σ(w_0)=0.083 by the CBS data. For w_a, it is poorly constrained by the CMB data but can be constrained by the CBS data to σ(w_a)=0.320. We find that using about 10^5 and 10^6 FRB data can also provide tight constraints on both w_0 and w_a. The FRB1 data can give the absolute errors σ(w_0) = 0.14 and σ(w_a) = 0.82. Furthermore, the FRB2 data can give σ(w_0) = 0.053 and σ(w_a) = 0.31. Hence, we can see that using about 10^6 FRB data can provide constraints on w_0 and w_a that are comparable to those from the CBS data. In our joint data analysis of the CMB and FRB data in the w_0w_aCDM model, the inclusion of the FRB data also can effectively break the parameter degeneracy formed by CMB. For example, CMB+FRB1 and CMB+FRB2 can significantly improve constraints on w_0 by 82% and 93%, respectively, compared to using CMB alone. The bottom-middle panel of Fig. <ref> shows the constraint contours of CMB, FRB1, CMB+FRB1 in the w_0–w_a plane, and the center-right panel shows those of CMB, FRB2 and CMB+FRB2. For comparison, the dashed contour of CBS is also added in the figure. We find that, the constraint on w_a from CMB+FRB1 is about 34% better than CBS, and CMB+FRB2 is definitely better than CBS for the constraints both on w_0 and w_a, because FRB2 alone has the constraining capability comparable to CBS as previously reported. Concretely, CMB+FRB2 gives σ(w_0) = 0.030 and σ(w_a) = 0.12, which are both about 60% better than CBS. 
On combining the CBS data with the FRB data both in the wCDM and w_0w_aCDM models, we find that, in most cases (with the exception of CBS+FRB1 in the w_0w_aCDM model), the errors of constraints are similar with those from combining the CMB data. This suggests that, if the combination FRB+BAO+SN is considered as a late-universe cosmological probe, the constraint capability is mainly from the FRB data of the SKA. In order to comprehensively discuss what role the numbers of FRB data could play in the constraints on cosmological parameters, we present the constraint errors of dark-energy EoS parameters (i.e., w, w_0 and w_a) for 10^3, 10^4, 10^5 and 10^6 FRB data in the upper panels of Fig. <ref>. We find that the errors of these constraints from the FRB data alone, are almost consistent with the 1/√(N) behavior, with N the number of data. We present a summary of the capability of future FRB data in constraining the dark-energy EoS parameters, based on our results of the wCDM and w_0w_aCDM models shown in Fig. <ref>. Using localized FRBs alone to measure dark energy requires a minimum of 10^4 to provide comparable constraints as those from CMB. However, even with this number of FRB data, the constraints are not yet precise. An effective approach is that using FRBs as a complementary cosmological probe to break the parameter degeneracy induced by CMB. For example, using 10^4 FRB data is comparable to using BAO data in breaking the parameter degeneracy of CMB <cit.>. Another approach could be directly utilizing a larger FRB sample, such as the data expected to be accumulated during the SKA era. At that time, the FRB data alone have the potential to break the parameter degeneracy of CMB even more effectively than BAO+SN. Furthermore, the SKA-era FRB data alone can provide constraints at least comparable to those from the CBS data, even without the help of CMB, and independently and precisely measure dark energy in the late universe. Overall, in the era of SKA, localized FRBs can be forged into one of the precise cosmological probes in exploring the late universe to study dark energy. §.§ The Hubble constant Then we report the measurement of the Hubble constant. Using FRB data alone can hardly constrain H_0 since DM_IGM∝Ω_ bH_0 (see Eq. (<ref>)), which results in the strong degeneracy between H_0 and Ω_ bh^2. However, the early-universe measurement by CMB can effectively constrain Ω_ bh^2. Therefore, combining the CMB and FRB data can break the degeneracy and obtain effective constraints on H_0. For the ΛCDM model, the CMB data can precisely constrain H_0 with a 0.9% measurement, but the inclusion of the FRB2 data can improve this by nearly an order of magnitude to almost 0.1%. Even for the dynamical dark energy models, the combination of the CMB and FRB data can constrain the relative error of H_0 to less than 1.0%. The left panel of Fig. <ref> shows the contours of CMB, CBS, FRB1, CMB+FRB1, FRB2 and CMB+FRB2 in the H_0–Ω_ bh^2 plane for the wCDM model, indicating that the degeneracy between H_0 and Ω_ bh^2 inherent in FRB can be effectively broken by CMB+FRB1 and CMB+FRB2. Concretely, CMB+FRB1 gives the constraints of ε(H_0) = 0.5% and 1.0% in the wCDM and w_0w_aCDM models, respectively. Compared to using the CMB data alone, this combination improves the constraints by about 93% and 86%, respectively. 
CMB+FRB2 further improves these constraints, giving ε(H_0) = 0.2% and 0.4% for the wCDM and w_0w_aCDM models, respectively, with improvements of about 94% and 97% compared to using the CMB data alone, respectively. However, the joint analysis of CMB and FRB may pose challenges as it involves combining observations from both the early and the late universe, and the Hubble tension reflects the tension between the early and the late universe <cit.>. As a result, it is not appropriate using CMB+FRB to study the Hubble tension. Thus, it is essential to investigate the estimation solely from the late universe, potentially providing a cross-check to the Hubble tension. The GW standard siren detection can precisely constrain the Hubble constant, as the luminosity distance d_L to a GW source can be directly obtained from its waveform and then the Hubble constant through the relation of d_L∝ 1/H_0. Therefore, combining the GW and FRB data can also give the effective constraints on H_0. Hence, we investigate the potential benefits of combining the GW standard siren observation with the ET and the FRB observation with the SKA in the coming decades. The constraint results from GW, FRB and GW+FRB are summarized in Table <ref>. The ET's GW standard siren data alone can give a 1.8% constraint on H_0 in the wCDM model, which is better than the CMB measurement. However, combining the GW and FRB data can effectively constrain H_0 better than the CBS data, which is illustrated in the right panel of Fig. <ref>. Compared to using the GW data alone, the inclusion of the FRB1 and FRB2 data can improve the constraints on H_0 by about 58% and 78%, respectively. About other cosmological parameters, while the GW standard siren data can only give a 14% constraint on w, the FRB2 data can give significantly better constraints (with a precision of 1.6%). The errors of constraints from FRB and GW+FRB are very similar. Hence, we do not discuss the dark-energy EoS constraints from GW+FRB. We also present a summary of the capability of future localized FRBs in combination with GWs as an independent probe of low-redshift cosmic expansion, as shown in the middle panels of Fig. <ref>. Combining 10^3 or 10^4 FRBs with GWs can provide H_0 constraints that are roughly comparable to those from CMB+BAO. Nevertheless, using 10^5 SKA-era FRBs in combination with GWs can lead to even better results than current CBS. Moreover, by using 10^6 FRBs in this combination, we can determine H_0 with a precision of less than 1% in all the models, meeting the standards of precision cosmology. This indicates that the combination of future GW and FRB observations can serve as a powerful independent probe to study the late universe expansion history and the Hubble tension. §.§ Baryon density Finally, we report the results on the cosmic baryon density estimated from the SKA-era FRBs. The parameters Ω_ b and Ω_ bh have already been constrained in literature, such as <cit.> and <cit.>, respectively. Thus, in addition to considering the case of free parameters of Ω_m, H_0 and Ω_ bh^2 as the previous subsections, we also considered these two cases of free parameters, i.e. (i) Ω_m, H_0 and Ω_ b; (ii) Ω_m and Ω_ bh. We use FRB1 and FRB2 to determine the credible intervals of Ω_ b, Ω_ bh and Ω_ bh^2 in the ΛCDM model. For the priors of Ω_ b and Ω_ bh, we use uniform distributions within ranges of [0, 0.11] and [0, 0.07], respectively. The priors for the other parameters are the same as previous. 
In addition, we also consider combining the FRB data with the local H_0 measurement of H_0= 73.04 ± 1.04  km s^-1Mpc^-1 <cit.>, i.e., H_0+FRB, to improve the constraints. The constraint results from using FRB, H_0+FRB and GW+FRB are summarized in Table <ref>. We find that using only the FRB data provides very tight constraints on Ω_ bh, which is better than those on Ω_ b and Ω_ bh^2. This is because FRBs can directly constrain Ω_ bh while measuring the others may lead to related degeneracies (see Eq. (<ref>)). For example, FRB1 gives a 0.3% result of σ(Ω_ bh)=0.000086, while Ω_ b and Ω_ bh^2 are poorly constrained to 8.2% and 17.2%, respectively. On combining the FRB data with the late-universe H_0 measurement, the constraints on Ω_ b and Ω_ bh^2 can be well improved when compared to those from the FRB data alone. For instance, H_0+FRB1 gives a constraint result of σ(Ω_ bh^2)=0.00035, which is a 92% improvement compared to FRB1 alone. For Ω_ bh, the constraints from H_0+FRB are not improved, since H_0 is included in Ω_ bh, and therefore cannot be set as an independent free parameter. Nevertheless, the results on Ω_ bh from H_0+FRB are still better than those on Ω_ b and Ω_ bh^2. In Fig. <ref>, we plot the one-dimensional posterior PDFs of Ω_ b, Ω_ bh and Ω_ bh^2 using the FRB2, H_0+FRB2 and GW+FRB2 data. We find that, the inclusion of the low-redshift H_0 measurement does not affect the best-fit value of Ω_ bh, but it significantly impacts Ω_ b, leading to a discrepancy (∼ 1.5σ) caused by the parameter degeneracy between H_0 and Ω_ b. Due to the Hubble tension, the actual value of H_0 is ambiguous, indicating that the Hubble tension may affect the local baryon census by FRB when discussing baryon density Ω_ b directly, but discussing Ω_ bh can effectively avoid the bias introduced by the Hubble tension. On combining the FRB data with the GW standard siren data, we find that this combination can give significantly better constraint results on all cases of Ω_ b, Ω_ bh and Ω_ bh^2 than combining the late-universe H_0 measurement. In fact, the constraints on Ω_ bh^2 are even better than that from CMB+BBN observations. For example, GW+FRB1 gives σ(Ω_ bh^2)=0.000070, which is twice as precise as σ(Ω_ bh^2)=0.00014 from CMB+BBN. This effect can be also seen in the right panel of Fig. <ref>. The GW data can effectively constrain the Hubble constant, and its inclusion can significantly improve the FRB constraint on Ω_ bh^2 due to the strong degeneracy between H_0 and Ω_ bh^2. This is similar to the method of combining CMB+FRB to constrain H_0. So, this joint approach may offer a very precise late-universe probe of the cosmic baryon density, independent of the early-universe CMB+BBN data. Note that the fiducial values of simulated GW data are consistent with the CBS data, so the constraints from GW+FRB are consistent with the CMB+BBN results but in tension with the H_0+FRB results in the Ω_ b and Ω_ bh^2 estimations. By comparing the results of individual and joint analyses on different baryon density parameters, we conclude that using FRBs alone to constrain Ω_ bh seems appropriate for a local baryon census. There are two reasons: (i) discussing Ω_ bh could be precise, as using FRB alone can provide a very precise constraint on Ω_ bh without the need to combine other data; (ii) discussing Ω_ bh could be accurate, effectively avoiding the bias introduced by the Hubble tension. 
§ CONCLUSION Although the current standard cosmological model remains in good concordance with observations, a series of puzzles remain, including the nature of dark energy, the Hubble tension and the missing baryon problem. This work investigates how the upcoming FRB observation with the SKA can help solve these problems. We consider two scenarios, one with a normal expected detection rate of ∼ 10^4 FRBs per year and another with an optimistic rate of ∼ 10^5 FRBs per year, and construct mock catalogues of 10^5 and 10^6 localized FRBs, respectively, over a 10-year operation time of the SKA. We use MCMC techniques to forecast constraints for cosmological parameters in three typical dark energy models, i.e., the ΛCDM, wCDM and w_0w_aCDM models. We have the following main findings. * The issue of dark energy. We find that using 10^6 FRB data alone can give very tight constraints on dark-energy EoS parameters, with σ(w)=0.016 in the wCDM model, and σ(w_0)=0.053 and σ(w_a)=0.31 in the w_0w_aCDM model. These results are about 3%–54% better than those from the current mainstream CBS data. Although the use of 10^5 FRBs could not achieve this precision, it is more effective than BAO+SN in breaking the parameter degeneracies inherent in CMB. The joint CMB+FRB data in the wCDM model gives σ(w)=0.022, and in the w_0w_aCDM model it gives σ(w_0)=0.080 and σ(w_a)=0.21. These results are over about 80% and 30% better than the CMB and CBS results, respectively, for both models. We conclude that during the SKA era, localized FRBs can be developed as a precise cosmological probe for exploring the late universe. This approach has the potential to provide a way to explore dark energy using only one cosmological probe. * The issue of the Hubble constant. We combine the SKA-era FRB data with the CMB or simulated GW standard siren data from ET's 10-year observation, and find that the combinations can effectively constrain the Hubble constant. This is because CMB and GW standard sirens can precisely constrain Ω_ bh^2 and H_0, respectively, thereby breaking the parameter degeneracies inherent in the FRB data. When combined with the GW data, we find that using 10^5 FRB data can give more precise measurements than CBS, and using 10^6 FRB data provides constraint precision of less than 1% in all the models, meeting the standard of precision cosmology. We conclude that both the joint CMB+FRB and GW+FRB results provide powerful methods to study the expansion history of the universe, and particularly the GW+FRB results, being a late-universe measurement, can serve as an independent cross-check to help study the Hubble tension. * The issue of baryon density. We compare the constraints on the baryon density parameters Ω_ b, Ω_ bh and Ω_ bh^2 under ΛCDM, and find that using the FRB data alone can give very tight constraints on Ω_ bh. In order to improve the constraints on Ω_ b and Ω_ bh^2 and simultaneously obtain a late-universe result, we can combine the local H_0 measurement and FRB data. However, when the local H_0 prior is applied, the Hubble tension may bias the FRB-based local baryon census if Ω_ b is discussed directly, whereas discussing Ω_ bh avoids this bias. Furthermore, combining the GW and FRB data can significantly improve the constraints on Ω_ bh^2, providing a very precise late-universe probe that is independent of CMB+BBN data. We conclude that using FRBs alone to constrain Ω_ bh can precisely and accurately measure the baryon density. 
Cosmology and dark energy are among the key science projects of the SKA; once the SKA is completed in the 2030s, the large samples of FRB detections with measured redshifts will allow the universe to be mapped with much greater accuracy. Moreover, other wide-field telescopes such as the Canadian Hydrogen Observatory and Radio transient Detector (CHORD) <cit.>, the Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) <cit.>, and the DSA-2000 <cit.> are also anticipated to contribute to the field by delivering additional samples of localized FRBs. The prospects of FRB cosmology are very bright, and many exciting science returns can be expected in the coming years. We are very grateful to Peng-Ju Wu, Shang-Jie Jin, Ling-Feng Wang, Jing-Zhao Qi and Chen-Hui Niu for helpful discussions. This work was supported by the National SKA Program of China (Grants Nos. 2022SKA0110200 and 2022SKA0110203) and the National Natural Science Foundation of China (Grants Nos. 11975072, 11835009, 11875102, and 11988101).
http://arxiv.org/abs/2307.02564v1
20230705180411
Secure-by-Construction Synthesis for Control Systems
[ "Bingzhuo Zhong", "Siyuan Liu", "Marco Caccamo", "Majid Zamani" ]
eess.SY
[ "eess.SY", "cs.SY" ]
In this paper, we present the synthesis of secure-by-construction controllers that address safety and security properties simultaneously in cyber-physical systems. Our focus is on studying a specific security property called opacity, which characterizes the system's ability to maintain plausible deniability of its secret behavior in the presence of an intruder. These controllers are synthesized based on a concept of so-called (augmented) control barrier functions, which we introduce and discuss in detail. We propose conditions that facilitate the construction of the desired (augmented) control barrier functions and their corresponding secure-by-construction controllers. To compute these functions, we propose an iterative scheme that leverages iterative sum-of-square programming techniques. This approach enables efficient computation of these functions, particularly for polynomial systems. Moreover, we demonstrate the flexibility of our approach by incorporating user-defined cost functions into the construction of secure-by-construction controllers. Finally, we validate the effectiveness of our results through two case studies, illustrating the practical applicability and benefits of our proposed approach. Secure-by-Construction Synthesis for Control Systems]Secure-by-Construction Synthesis for Control Systems ^1TUM School of Engineering and Design, Technical University of Munich, Germany {bingzhuo.zhong,mcaccamo}@tum.de ^2Division of Decision and Control Systems, KTH Royal Institute of Technology, Stockholm, Sweden [email protected] ^3Department of Computer Science, University of Colorado Boulder, USA [email protected] [ Majid Zamani^3 ================== § INTRODUCTION Over the past few decades, cyber-physical systems (CPS) have emerged as the technological foundation of an increasingly intelligent and interconnected world. These systems play a crucial role in various domains, ranging from transportation systems and smart grids to medical devices. With their complexity, CPS enable innovative functionalities and offer high-performance capabilities. However, the safety-critical nature of CPS raises significant concerns. Any design faults or malfunctions in these systems can have catastrophic consequences, potentially resulting in loss of life. Given the tight interaction between the cyber components and physical entities within CPS, they are also more susceptible to various security threats and attacks. Consequently, there is a growing need to address both safety and security concerns in modern CPS. Ensuring the reliable operation of these systems across different domains has become imperative. Several studies, such as those in <cit.>, emphasize the importance of tackling these issues and provide valuable insights into secure CPS design and operation. Various formal verification and synthesis techniques have been investigated to ensure safety in CPS <cit.>. Abstraction-based methods have gained significant popularity in the last two decades for safety analysis of CPS <cit.>. These methods approximate original systems with continuous state and input sets by their finite abstractions, constructed by discretizing the original sets. Unfortunately, this approach often encounters the “curse of dimensionality,” resulting in exponential computational complexity growth with system dimension. 
Alternatively, set-based approaches <cit.> or (control) barrier functions <cit.> can be utilized for verification and controller synthesis without requiring finite abstractions, while still ensuring the overall CPS safety. In summary, CPS design has increasingly prioritized safety. However, security analysis is often postponed to later stages, leading to expensive and time-consuming validation procedures. These challenges motivate researchers <cit.> to develop an integrated approach that addresses safety and security issues simultaneously for CPS, leveraging the principles of correct-by-construction synthesis <cit.>. The primary objective of this paper is to propose an approach for synthesizing secure-by-construction controllers in safety- and security-critical CPS. To achieve this, we expand the existing correct-by-construction controller synthesis paradigm to encompass security requirements alongside safety considerations, all within a unified framework. To begin exploring the construction of this comprehensive framework, we focus on a specific class of security properties closely tied to CPS information flow, known as “opacity" <cit.>. Opacity, which is a confidentiality property <cit.>, ensures that a system's secret behaviors remain plausibly deniable in the presence of malicious external observers, also referred to as intruders. In this context, a system is deemed opaque if it is impossible for an external intruder to uncover the system's secrets through the information flow. The concept of opacity was initially introduced in <cit.> as a means to analyze the performance of cryptographic protocols. Since then, various opacity notions have been proposed to capture different security requirements and information structures in discrete-event systems (DES) (see, for example <cit.>). Among the most widely adopted notions of opacity are language-based opacity <cit.> and state-based opacity <cit.>, which includes variants such as initial-state opacity <cit.>, current-state opacity <cit.>, infinite-step opacity <cit.>, and pre-opacity <cit.>. Building upon these notions, numerous works have been developed for the verification or synthesis of controllers with respect to opacity in DES modeled as finite-state automata <cit.> or Petri nets <cit.>. Related work. Over the past years, there has been a growing interest in analyzing opacity for continuous-space control systems <cit.>. Concretely, <cit.> extended the concept of opacity in discrete-event systems (DES) to continuous-space cyber-physical systems (CPS) by introducing the notion of approximate opacity for metric systems, which takes into account the imprecision of external observations. Subsequently, various methods were developed to verify whether a given control system is approximately opaque <cit.>. However, only a few recent results <cit.> address the problem of controller synthesis for opacity properties over CPS. Among these works, <cit.> proposed an abstraction-based controller synthesis approach for CPS with finite state sets, leveraging opacity-preserving alternating simulation relations between the original systems and their abstractions. The results presented in <cit.> addressed the controller synthesis problem for finite Markov decision processes that model stochastic systems with security considerations. The results in <cit.> employed a model-free approximation-based Q-learning method to tackle the opacity enforcement problem in linear discrete-time control systems. 
It is important to note that all the aforementioned results focus on verification or controller synthesis with respect to opacity, without taking safety considerations into account. More recently, <cit.> proposed a two-stage controller synthesis scheme to enforce safety and approximate opacity in control systems with finite output sets. Specifically, they first synthesized an abstraction-based safety controller without considering opacity properties. In the second stage, control inputs and state transitions that violate the approximate opacity of the system are eliminated. It should be noted that this two-stage approach can lead to an overly conservative controller since opacity is addressed in the later stage, potentially imposing excessive restrictions on the safety controller obtained in the first stage. Our contribution. In this paper, we present an abstraction-free scheme for constructing secure-by-construction controllers that enforce safety and security properties simultaneously in control systems with continuous state and input sets. Specifically, we focus on initial-state and infinite-step opacity as the desired security properties, and invariance properties as the safety properties of interest. To synthesize these controllers, we utilize a concept of (augmented) control barrier functions. First, we establish conditions under which (augmented) control barrier functions can be synthesized. We then propose an iterative sum-of-squares (SOS) programming scheme as a systematic approach for computing the desired (augmented) control barrier functions. Additionally, we discuss how user-defined cost functions can be incorporated into the construction of secure-by-construction controllers. Some of the results presented in this paper have been previously introduced in our preliminary work in <cit.>. However, this paper significantly enhances and extends those results in several ways. Firstly, we provide the proofs for all the statements that were omitted in <cit.>. Secondly, we develop secure-by-construction controller synthesis schemes for both initial-state and infinite-step opacity, whereas <cit.> only considered initial-state opacity. Finally, we propose a systematic approach for computing the (augmented) control barrier functions through solving an iterative SOS programming problem, which was not presented in <cit.>. § PROBLEM FORMULATION §.§ Notations and Preliminaries In this paper, we denote by and the set of real numbers and non-negative integers, respectively. These symbols are annotated with subscripts to restrict them in a usual way, e.g., _>0 denotes the set of positive real numbers. For a,b∈ℝ (resp. a,b∈ℕ) with a≤ b, the closed, open and half-open intervals in ℝ (resp. ℕ) are denoted by [a,b], (a,b), [a,b), and (a,b], respectively. Given N ∈ℕ_≥ 1 vectors x_i ∈ℝ^n_i, with i∈ [1;N], n_i∈ℕ_≥ 1, and n = ∑_i n_i, we denote the concatenated vector in ℝ^n by x = [x_1;…;x_N] and the Euclidean norm of x by ‖ x‖. Given a set X, we denote by 2^X the powerset of X. Given a set Y⊆ℝ^2n, we denote by Proj(Y) and Proj(Y) the projection of the set Y on to the first and the last n coordinates, respectively, i.e., Proj(Y):={y∈ℝ^n | ∃ŷ∈ℝ^n, s.t. [y;ŷ]∈ Y}, and Proj(Y): = {ŷ∈ℝ^n | ∃ y∈ℝ^n, s.t. [y;ŷ]∈ Y}. Given a matrix A, we denote by A^⊤, trace(A), and {A}_i,j, the transpose, the trace, and the entry in the i-th row and j-th column of A, respectively. Given a_1,…,a_n∈ℝ, we denote by Diag(a_1,…,a_n) a diagonal matrix with a_1,…,a_n on its diagonal. 
Given sets X and Y, the complement of X with respect to Y is defined as Y \ X ={x ∈ Y | x ∉ X}. Given functions f: X →Y and g: A → B, we define f × g : X × A → Y × B. Moreover, we recall some definitions from <cit.>, which are required throughout this paper. (Monomial and matrix monomial) Consider x:=[x_1;…;x_n]∈ℝ^n. A monomial m:ℝ^n→ℝ in x is a function defined as m(x):=x_1^α_1x_2^α_2⋯ x_n^α_n, with α_1,…,α_n∈ℕ, and we denote by ℳ(x) the sets of all monomials over x∈ℝ^n. The degree of the monomial m(x) is defined as deg(m(x)):=∑_i=1^nα_i. Similarly, a function M:ℝ^n→ℝ^r_1× r_2 is a matrix monomial if {M(x)}_i,j∈ℳ(x), ∀ i∈[1,r_1], ∀ j∈[1,r_2]. We denote by ℳ^𝗆(x) the set of all matrix monomials over x∈ℝ^n. Furthermore, the degree of a matrix monomial M(x) is defined as deg(M(x)):=max_i∈[1,r_1],j∈[1,r_2]deg({M(x)}_i,j). (Polynomial and matrix polynomial) A polynomial h:ℝ^n→ℝ of degree d is a sum of a finite number of monomials, as p(x):=∑_i^N_𝗂c_im_i(x), with c_i∈ℝ, m_i(x)∈ℳ(x), and d:=max_c_i≠ 0deg(m_i(x)). We denote by 𝒫(x) the set of polynomials over x∈ℝ^n. Moreover, a function 𝐏:ℝ^n→ℝ^r_1× r_2 is a matrix polynomial if {𝐏(x)}_i,j∈𝒫(x), ∀ i∈[1,r_1], ∀ j∈[1,r_2]. We denote by 𝒫^𝗆(x) the set of matrix polynomials over x∈ℝ^n. Accordingly, the degree of the matrix polynomial 𝐏(x) is defined as deg(𝐏(x)):=max_i∈[1,r_1],j∈[1,r_2]deg({𝐏(x)}_i,j). (SOS polynomial and SOS matrix polynomial) A polynomial p(x)∈𝒫(x) is a sum-of-square (SOS) polynomial if there exists p_1(x),…,p_𝗂(x)∈𝒫(x) such that p(x)=∑_i=1^𝗂p_i^2(x). Similarly, 𝒫_S(x) denotes the set of all SOS polynomials over x∈ℝ^n. Moreover, a matrix polynomial 𝐏(x)∈𝒫^𝗆(x) is an SOS matrix polynomial if there exists 𝐏_1(x),…,𝐏_𝗃(x)∈𝒫^𝗆(x) such that 𝐏(x)=∑_i=1^𝗃𝐏_i^⊤(x)𝐏_i(x). The set of SOS matrix polynomials over x∈ℝ^n is denoted by 𝒫_S^𝗆(x). §.§ Main Problem In this paper, we focus on discrete-time control systems, as defined below. A discrete-time control system (dt-CS) Σ is a tuple Σ:=(X, X_0, U,f, Y,h), in which X⊆ℝ^n, X_0 ⊆ X ⊆ℝ^n, U ⊆ℝ^m, and Y⊆ℝ^q denote the state set, initial state set, input set, and output set, respectively. The function f: X× U → X is the state transition function, and h: X → Y is the output function. A dt-CS Σ can be described by Σ:{[ x(k+1)= f(x(k),ν(k)),; y(k)= h(x(k)), k∈ℕ, ]. in which x(k)∈ X, ν(k)∈ U, and y(k)∈ Y. Moreover, we denote by ν: = (ν(0),…,ν(k),…) an input run of Σ, and by 𝐱_x_0,ν:= (x(0),…,x(k),…) a state run of Σ starting from initial state x_0 under input run ν, i.e., x(0)=x_0, x(k+1)=f(x(k),ν(k)), ∀ k∈ℕ. Additionally, given a controller C:X → 2^U applied to the system Σ, ν is called an input run generated by C if ν(k)∈ C(x(k)), ∀ k∈ℕ. We further denote by Σ_C = Σ× C the closed-loop system under the feedback controller C. In this paper, we focus on designing a secure-by-construction controller for the dt-CS as in Definition <ref> while simultaneously considering security and safety properties in the controller design procedure. Here, the safety properties of interest are formally defined below. Consider a dt-CS Σ as in Definition <ref>, an unsafe set X_d ⊆ X, and a controller C. The closed-loop system Σ_C:=Σ× C is safe if ∀ x_0∈ X_0, one has 𝐱_x_0,ν(k)∉ X_d, ∀ k ∈ℕ, where ν(k)∈ C(x(k)). In other words, the desired safety property requires that any state run of the system must not enter the unsafe set. Meanwhile, the security properties are expressed as an information-flow security property called opacity. In this context, we assume the existence of an outside observer (a.k.a. 
intruder) that knows the system model. Without actively affecting the behavior of the system, the intruder aims to infer certain secret information about the system by observing the output sequences remotely. In this paper, we focus on two important state-based notions of opacity called approximate initial-state opacity and approximate infinite-step opacity <cit.>, which can be used to model security requirements in a variety of applications, including secure cryptographic protocols and tracking problems in sensor networks <cit.>. Here, we denote by X_s :={X_s^init,X_s^inf} the set of secret states, in which X_s^init,X_s^inf⊆ X denote the secret state sets related to approximate initial-state and infinite-step opacity, respectively. In the rest of this paper, we incorporate the unsafe and secret state sets X_d and X_s in the system definition and use Σ=(X, X_0, X_s, X_d,U,f, Y,h) to denote a dt-CS under safety and security requirements. The formal definitions of approximate initial-state and infinite-step opacity are then recalled from <cit.> as follows. Consider a dt-CS Σ=(X,X_0,X_s, X_d,U,f,Y,h) and a constant δ∈ℝ_≥ 0. System Σ is said to be * δ-approximate initial-state opaque if for any x_0 ∈ X_0 ∩ X^init_s and any finite state run 𝐱_x_0,ν=(x_0,…, x_n), there exists a finite state run 𝐱_x̂_0,ν̂=(x̂_0,…, x̂_n), with x̂_0 ∈ X_0 ∖ X^init_s, such that ‖ h(x_i)-h(x̂_i)‖≤δ, ∀ i∈[0,n]. * δ-approximate infinite-step opaque if for any x_0 ∈ X_0, any finite state run 𝐱_x_0,ν=(x_0,…, x_n), and any k ∈ [0,n] such that x_k ∈ X^inf_s, there exists a finite state run 𝐱_x̂_0,ν̂=(x̂_0,…, x̂_n), such that ‖ h(x_i)-h(x̂_i)‖≤δ, ∀ i∈[0,n], where x̂_0 ∈ X_0 and x̂_k ∈ X ∖ X^inf_s. Intuitively, δ-approximate initial-state opacity requires that the intruder is never certain whether the system was initiated from a secret state; δ-approximate infinite-step opacity requires that the intruder is never certain that whether the system is/was at a secret state for any time instant k ∈ℕ. Here, constant δ captures the imprecision of the intruder's observation. It is also worth noting that to enforce δ-approximate initial-state (or infinite-step) opacity over Σ, the secret of the system should at least not be revealed initially; otherwise, both notions of δ-approximate opacity are trivially violated. Hence, we assume, without loss of generality: ∀ x_0 ∈ X_0 ∩ X'_s, {x ∈ X | ‖ h(x) - h(x_0)‖≤δ}⊈ X'_s. with X'_s∈{X_s^init,X_s^inf}. Now, we are ready to formulate the main problem to be tackled in this paper. Consider a dt-CS Σ=(X,X_0,X_s,X_d,U,f,Y,h) and a constant δ∈ℝ_≥ 0. Synthesize a secure-by-construction controller C:X → 2^U (if existing) such that the closed-loop system Σ_C = Σ× C satisfies: * (Safety) Σ_C is safe, i.e., (<ref>) holds. * (Security) Σ_C is δ-approximate initial-state (resp. infinite-step) opaque as in Definition <ref>. Additionally, we deploy the following running example throughout this paper to better illustrate the theoretical results. (Running example) We consider a car moving on a road, as depicted in Figure <ref>. A malicious intruder, with observation precision δ=0.94, remotely tracks the car's position. Here, the car's initial location, denoted as X_0, holds confidential information as it performs a secret task (e.g., transferring money from a bank to an ATM). Furthermore, the region X^inf_s represents a special-purpose lane on the road. If the intruder confirms the car's entry into this lane, they deduce the car's involvement in a confidential task. 
Our objective is to construct a controller that ensures safety (i.e., keeping the car on the road) while avoiding disclosure of secret information to the intruder (i.e., whether the car is executing a confidential task or not). In the running example, we consider a car modeled by [ x_1(k+1); x_2(k+1) ] = [ 1 Δτ; 0 1 ][ x_1(k); x_2(k) ]+[ Δτ^2/2; Δτ ]ν(k), y(k) = x_1(k), where x_1 and x_2 are the absolute position and velocity of the car in the road frame, respectively; u∈[-4,4] m/s^2 is the acceleration of the car as the control input; Δτ=0.1s is the sampling time; and y is the output of the system which is observed by a malicious intruder. Here, we consider the state set X := [-10, 10] × [-6,6], initial set X_0 := [2.7, 3.3] ×{0}, secret sets X^init_s:= [2.8, 3.2] ×{0} and X^inf_s:= [5.57, 7.5] × [-5, 5], and unsafe set X_d:={[-10,-6.5)∪(7.5,10]}×{[-6,-5)∪(5,6]}. In straightforward terms, it is necessary for the car to remain within the range of [-6.5,7.5] and not exceed an absolute velocity of 5 m/s to ensure safety. Additionally, the desired security requirement can be described using the concepts of δ-approximate initial-state and infinite-step opacity, as defined in Definition <ref>, where δ is set to 0.94. ♢ § SYNTHESIS OF SECURE-BY-CONSTRUCTION CONTROLLERS In this section, we discuss how to construct secure-by-construction controllers as introduced in Problem <ref>. Concretely, we first propose in Section <ref> notions of (augmented) control barrier functions for enforcing both safety and opacity properties. Leveraging these functions, we then discuss in Section <ref> how to design secure-by-construction controllers by solving a quadratic program (QP) considering some user-defined cost functions. §.§ Control Barrier Functions for Secure-by-Construction Controller Synthesis Consider a dt-CS Σ=(X,X_0,X_s,X_d,U,f,Y,h) as in Definition <ref>. To tackle Problem <ref>, we introduce an augmented system associated with Σ, which is the product between Σ and itself, defined as Σ×Σ = (X × X, X_0 × X_0, X_s × X_s,X_d × X_d,U × U,f × f, Y × Y, h × h). Here, we denote by (x,x̂) ∈ X × X a state pair of Σ×Σ, and by (𝐱_x_0,ν, 𝐱_x̂_0,ν̂) the state trajectory of Σ×Σ starting from (x_0, x̂_0) under input run (ν, ν̂). Moreover, we use ℛ=X × X to represent the augmented state set. Having the augmented system, we show that one can synthesize controllers to enforce both safety and security properties over dt-CS by leveraging a notion of (augmented) control barrier functions. To this end, the following definition is required. Consider a dt-CS Σ=(X,X_0,X_s,X_d,U,f,Y,h) as in Definition <ref>. Given some sets ℛ_0, ℛ_d⊂ℛ, function ℬ:X →ℝ is called a control barrier function (CBF), and function ℬ_O:X × X →ℝ is called an augmented control barrier function (ACBF) with respect to ℛ_0 and ℛ_d, if there exists C:X→ 2^U such that the following conditions hold: * (Cond.1) X_0 ⊆𝒮; * (Cond.2) X_d ⊆ X\𝒮; * (Cond.3) ℛ_0 ⊆𝒮_O; * (Cond.4) ℛ_d ⊆ℛ\𝒮_O; * (Cond.5) ∀ x ∈𝒮, ∀ u∈ C(x), one has f(x,u)∈𝒮; * (Cond.6) ∀ (x,x̂) ∈𝒮_O, ∀ u∈ C(x), ∃û∈ U, such that one has (f(x,u),f(x̂,û))∈𝒮_O; where sets 𝒮 and 𝒮_O are defined as 𝒮 := {x∈ℝ^n | ℬ(x)≤ 0 }; 𝒮_O := {(x,x̂)∈ℝ^n ×ℝ^n | ℬ_O(x,x̂)≤ 0 }. From now on, sets 𝒮 and 𝒮_O are called CBF-based invariant set (CBF-I set) and ACBF-based invariant set (ACBF-I set) associated with sets ℛ_0 and ℛ_d, respectively. Note that the concrete forms of sets ℛ_0 and ℛ_d depend on the opacity properties of interest (cf. (<ref>), (<ref>), (<ref>), and (<ref>)). 
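The car model of the running example can be simulated directly from the matrices stated above. The following Python sketch encodes the dynamics, the safety region and the secret set X^inf_s; the proportional feedback law used here is only a placeholder to exercise the model and is not the secure-by-construction controller developed below.

import numpy as np

DT = 0.1
A = np.array([[1.0, DT], [0.0, 1.0]])
B = np.array([DT ** 2 / 2.0, DT])

def step(x, u):
    """One step of the car model x(k+1) = A x(k) + B u(k), with u saturated to [-4, 4] m/s^2."""
    return A @ x + B * np.clip(u, -4.0, 4.0)

def is_safe(x):
    """Safety of the running example: position within [-6.5, 7.5] and |velocity| <= 5 m/s."""
    return (-6.5 <= x[0] <= 7.5) and (abs(x[1]) <= 5.0)

def in_secret_inf(x):
    """Membership in the infinite-step secret set X_s^inf = [5.57, 7.5] x [-5, 5]."""
    return (5.57 <= x[0] <= 7.5) and (-5.0 <= x[1] <= 5.0)

# Simulate a trajectory from an initial state in X_0 under a simple proportional law
# (placeholder feedback only, not the secure controller synthesized in this paper).
x = np.array([3.0, 0.0])
for k in range(100):
    u = -1.5 * (x[0] - 2.0) - 2.0 * x[1]
    x = step(x, u)
    assert is_safe(x), f"safety violated at step {k}"
print("final state:", x, "in X_s^inf:", in_secret_inf(x))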
More specifically, to synthesize secure-by-construction controllers based on these sets enforcing both safety and approximate initial-state opacity, we define sets: ℛ^init_0:= {(x,x̂) ∈ (X_0 ∩ X^init_s) × X_0 ∖ X^init_s |‖ h(x)-h(x̂)‖≤δ-ϵ}, ℛ^init_d:= {(x,x̂)∈ℛ | ‖ h(x)-h(x̂)‖≥δ}, where δ∈ℝ_≥ 0 captures the security level of the system as in Definition <ref> and ϵ∈[0,δ] is an arbitrary real number. Based on these sets, we propose one of the main results in this paper for synthesizing secure-by-construction controllers. Consider a dt-CS Σ as in Definition <ref>. Suppose that there exist functions ℬ:X →ℝ, ℬ_O:X × X →ℝ, and C:X→ 2^U, such that conditions (Cond.1) - (Cond.6) in Definition <ref> hold, with sets ℛ_d = ℛ^init_d, and ℛ_0≠∅ satisfying Proj(ℛ^init_0) ⊆Proj(ℛ_0) , Proj(ℛ_0) ⊆Proj(ℛ^init_0), in which sets ℛ^init_0 and ℛ^init_d are defined in (<ref>) and (<ref>), respectively. Then, C(x) is a secure-by-construction controller that enforces safety and approximate initial-state opacity of Σ simultaneously, as stated in Problem <ref>. The proof of Theorem <ref> is provided in the Appendix <ref>. Essentially, the desired safety property is enforced by (Cond.1), (Cond.2), and (Cond.5), while (Cond.3), (Cond.4), and (Cond.6) enforce the opacity property. In some cases, it may not be easy to find ℛ_0 satisfying (<ref>) even if such ℛ_0 exists. To solve this issue, we propose a corollary which can be used to construct controllers enforcing safety and approximate initial-state opacity without requiring the concrete form of the set ℛ_0 satisfying (<ref>). Consider a dt-CS Σ as in Definition <ref>. Suppose (Cond.1)-(Cond.2) and (Cond.4)-(Cond.6) in Definition <ref> hold for some functions ℬ:X →ℝ, ℬ_O:X × X →ℝ, and C:X→ 2^U, with set ℛ_d = ℛ^init_d. If ∀ x∈Proj(ℛ^init_0), ∃x̂∈Proj(ℛ^init_0), such that (x,x̂)∈𝒮_O, with ℛ^init_0 as in (<ref>), or, equivalently, max_x∈Proj(ℛ^init_0) min_x̂∈Proj(ℛ^init_0)ℬ_O(x,x̂)≤ 0, then controller C(x) enforces safety and approximate initial-state opacity as in Problem <ref>. The proof of Corollary <ref> is provided in the Appendix <ref>. Note that one may deploy existing results, e.g. <cit.>, to tackle the problem in (<ref>). Next, we proceed with discussing the design of secure-by-construction controllers enforcing safety and approximate infinite-step opacity. Given δ∈ℝ_≥ 0, we define sets ℛ^inf_0 and ℛ^inf_d as in (<ref>) and (<ref>), respectively, ℛ^inf_0 :={(x,x̂) ∈ (X_0 ∖ X^inf_s) × X_0 | ||h(x) - h(x̂)|| ≤δ-ϵ}∪{(x,x̂) ∈ (X_0 ∩ X^inf_s)× X_0∖ X^inf_s | ||h(x) - h(x̂)|| ≤δ-ϵ}, ℛ^inf_d :={(x,x̂)∈X^inf_s × (X ∖ X^inf_s) | ||h(x) - h(x̂)|| ≥δ}∪{(x,x̂) ∈ (X ∖ X^inf_s) ×X ||h(x) - h(x̂)|| ≥δ}∪ X^inf_s ×X^inf_s , where ϵ∈[0,δ] is any arbitrary real number. Then, one can deploy the next result to synthesize secure-by-construction controllers enforcing safety and approximate infinite-step opacity. Consider a dt-CS Σ as in Definition <ref>. Suppose that there exist functions ℬ:X →ℝ, ℬ_O:X × X →ℝ, and C:X→ 2^U such that conditions (Cond.1) - (Cond.6) in Definition <ref> hold, with sets ℛ_d= ℛ^inf_d, and ℛ_0≠∅ such that Proj(ℛ^inf_0) ⊆Proj(ℛ_0) , Proj(ℛ_0) ⊆Proj(ℛ^inf_0), hold, where ℛ^inf_0 and ℛ^inf_d are defined as in (<ref>) and (<ref>), respectively. Then, C(x) is a secure-by-construction controller that enforces safety and approximate infinite-step opacity of Σ, as in Problem <ref>. The proof of Theorem <ref> is given in the Appendix <ref>. 
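The max-min condition of Corollary <ref> (and its infinite-step analogue below) can be checked numerically once a candidate ℬ_O is available. The sketch below performs this check on a grid for the running example, using a simple quadratic placeholder for ℬ_O; the actual function is computed via the SOS programming of the next section, so the placeholder is purely for illustration.

import numpy as np

DELTA = 0.94
def acbf(x1, xh1):
    """Placeholder quadratic ACBF candidate for the running example (illustration only)."""
    return (x1 - xh1) ** 2 / DELTA ** 2 - 1.0

# Grids over the two projections of R_0^init from the running example
x_grid = np.linspace(2.8, 3.2, 101)                      # Proj_1(R_0^init): positions in X_0 ∩ X_s^init
xh_grid = np.concatenate([np.linspace(2.7, 2.799, 25),   # Proj_2(R_0^init): positions in X_0 \ X_s^init
                          np.linspace(3.201, 3.3, 25)])

# Condition of Corollary 1: max over x of (min over xh) of B_O should be <= 0
inner_min = np.array([min(acbf(x, xh) for xh in xh_grid) for x in x_grid])
print("max-min of B_O over the grids:", inner_min.max())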
Similar to Corollary <ref>, one can deploy the next corollary to build controllers enforcing safety and infinite-step opacity without explicitly having the concrete form of ℛ_0 satisfying (<ref>). Consider a dt-CS Σ as in Definition <ref>. Suppose one can find functions ℬ:X →ℝ, ℬ_O:X × X →ℝ, and C:X→ 2^U, such that (Cond.1)-(Cond.2) and (Cond.4)-(Cond.6) in Definition <ref> hold, with set ℛ_d = ℛ^inf_d. If ∀ x∈Proj(ℛ^inf_0), ∃x̂∈Proj(ℛ^inf_0), such that (x,x̂)∈𝒮_O, with ℛ^inf_0 as in (<ref>), or, equivalently, max_x∈Proj(ℛ^inf_0) min_x̂∈Proj(ℛ^inf_0)ℬ_O(x,x̂)≤ 0, then controller C enforces safety and approximate infinite-step opacity as in Problem <ref>. Due to space limitation, we omit the proof for Corollary <ref> since it is similar to that of Corollary <ref>. example-1 [continued] (Running example) With δ =0.94, we construct sets ℛ^init_0 in (<ref>), ℛ^init_d in (<ref>), ℛ^inf_0 in (<ref>), and ℛ^inf_d in (<ref>) as in (<ref>)-(<ref>), respectively, with ϵ:=0.01: ℛ^init_0 ={[x_1;x_2] ∈ [2.8, 3.2] ×{0},[x̂_1;x̂_2] ∈ [2.7, 2.8)∪(3.2, 3.3] ×{0}| ||x_1 - x̂_1|| ≤δ - ϵ}; ℛ^init_d ={(x, x̂) ∈ X × X | ||x_1 -x̂_1|| ≥δ}; ℛ^inf_0 ={[x_1;x_2] ∈ [2.7, 3.7] ×{0},[x̂_1;x̂_2] ∈ [2.7, 3.3] ×{0}| ||x_1 - x̂_1|| ≤δ - ϵ}; ℛ^inf_d ={(x_1, x_2) ∈ [5.57, 7.5] × [-6,6], [x̂_1;x̂_2] ∈([-10, 5.57)∪(7.5,10])× [-6,6] | ||x_1 -x̂_1|| ≥δ}∪{(x_1, x_2) ∈ ([-10, 5.57)∪(7.5,10]) × [-6,6], [x̂_1;x̂_2] ∈([-10,10])× [-6,6] | ||x_1 -x̂_1|| ≥δ}∪{(x, x̂) ∈ X^inf_s × X^inf_s}. Accordingly, for the desired δ-approximate initial-state opacity, one can select ℛ_0 satisfying (<ref>) as: {[x_1;x_2] ∈ [2.8, 3.2] ×{0},[x̂_1;x̂_2] ∈ [2.75, 2.8)∪(3.2, 3.25] ×{0}| ||x_1 - x̂_1|| ≤δ - ϵ}. As for the desired δ-approximate infinite-step opacity, one can select ℛ_0 satisfying (<ref>) as: {(x,x̂)∈ X_0 × X_0 | x = x̂}. So far, we have proposed notions of CBF and ACBF for constructing secure-by-construction controllers to enforce both safety and opacity properties. Note that these controllers are in general set-valued maps, i.e., assigning to each x∈ X a set of feasible control inputs (see Theorems <ref> and <ref>). To provide a single input at each time instant, instead of randomly selecting a control input within the admissible set, one can introduce a user-defined cost function and compute a single control input at each state by solving a QP. In the next section, we explain how to construct secure-by-construction controllers by incorporating user-defined cost functions. §.§ Design of Secure-by-Construction Controllers with User-defined Cost Functions Consider a user-defined cost function denoted by 𝒥: X× U→ℝ. One can construct secure-by-construction controllers by minimizing such a function using Corollary <ref>. Consider a dt-CS Σ, its associated augmented system Σ×Σ as in (<ref>), and a user-defined cost function 𝒥: X× U→ℝ. Suppose that one obtains CBF ℬ(x) and ACBF ℬ_O(x,x̂) by leveraging Theorems <ref> and <ref>. Then, the closed-loop system Σ_C := Σ× C is safe and δ-approximate initial-state (resp. infinite-step) opaque as described in Problem <ref>, with controller C generating control input by solving the following optimization problem OP at each time step k∈ℕ: OP: min_u∈ U 𝒥(x(k),u) ℬ_O(f(x(k),u),f(x̂(k),û))≤ 0, ℬ(f(x(k),u))≤ 0, and û∈ U, with (x(k), x̂(k)) being state of the augmented system Σ×Σ at time step k. The proof of Corollary <ref> is provided in the Appendix <ref>. Here, the architecture of the controller by leveraging Corollary <ref> is depicted in Figure <ref> and summarized in Algorithm <ref>. 
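To illustrate the per-step computation in OP, the following sketch solves it by brute-force discretization of U for the running example. The barrier functions ℬ and ℬ_O, the set point x_set and the state initialization are placeholder choices made only for illustration (the actual functions come from the SOS construction of the next section); in addition, the state-tracking term of the cost is evaluated at the successor state here so that it depends on u, and a dedicated QP solver would replace the grid search in practice.

import numpy as np
from itertools import product

DT = 0.1
A = np.array([[1.0, DT], [0.0, 1.0]])
B = np.array([DT ** 2 / 2.0, DT])
U_GRID = np.linspace(-4.0, 4.0, 81)            # naive discretization of U = [-4, 4]
X_SET = np.array([2.0, 0.0])                   # hypothetical set point for the cost

def f(x, u):                                   # car dynamics of the running example
    return A @ x + B * u

def cbf(x):                                    # placeholder B(x): an ellipse inside the safe set
    return ((x[0] - 0.5) / 7.0) ** 2 + (x[1] / 5.0) ** 2 - 1.0

def acbf(x, xh):                               # placeholder B_O(x, xh): output distance below delta
    return (x[0] - xh[0]) ** 2 / 0.94 ** 2 - 1.0

def cost(x, u):                                # running-example cost with M_1 = Diag(30, 1)
    dx = x - X_SET
    return 30.0 * dx[0] ** 2 + dx[1] ** 2 + 0.1 * u ** 2

def solve_op(x, xh):
    """Grid-search stand-in for OP: pick (u, uh) minimizing the cost subject to the
    CBF and ACBF constraints, with the cost evaluated at the successor state."""
    best = None
    for u, uh in product(U_GRID, U_GRID):
        xn, xhn = f(x, u), f(xh, uh)
        if cbf(xn) <= 0.0 and acbf(xn, xhn) <= 0.0:
            c = cost(xn, u)
            if best is None or c < best[0]:
                best = (c, u, uh)
    if best is None:
        raise RuntimeError("OP infeasible at this state pair")
    return best[1], best[2]

# Closed loop following the memory-state mechanism of the algorithm below
x, xh = np.array([3.0, 0.0]), np.array([2.75, 0.0])   # x0 in X_0, xh0 chosen with B_O(x0, xh0) <= 0
for k in range(50):
    u, uh = solve_op(x, xh)
    x, xh = f(x, u), f(xh, uh)
print("final state pair:", x, xh)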
Note that we introduced an internal memory state (denoted by x̂ in Figure <ref> and Algorithm <ref>) for the controller. As a result, at each time step one only needs to solve a minimization problem over the input pair (u,û), instead of searching online over all possible alternative trajectories of x̂, which would be difficult to do in real-time. Algorithm <ref> requires as input a dt-CS Σ=(X, X_0, X_s, X_d,U,f,Y,h) and its associated augmented system Σ×Σ as in (<ref>); CBF ℬ(x) and ACBF ℬ_O(x,x̂) as in Theorem <ref>; a user-defined cost function 𝒥: X× U→ℝ; and the initial state x_0. Its output is a control input u(k) at each time step k∈ℕ. Initialize k=0 and x(0)=x_0, and repeat the following loop: if k = 0, set x̂(0)=x̂_0∈ X_0 such that ℬ_O(x_0,x̂_0)≤ 0; otherwise, update x̂(k) as x̂(k):=f(x̂(k-1),û(k-1)). Obtain the current state x(k). Compute u(k) and û(k) by solving the optimization problem OP in Corollary <ref>, with x:=x(k), x̂:=x̂(k). Set k=k+1. (Algorithm: Running mechanism of secure-by-construction controllers incorporating a user-defined cost function.) If one only focuses on enforcing approximate initial-state opacity and x_0∈ X_0\ X^init_s, then one can select x̂_0=x_0, and set ν̂(k)=ν(k) for all k∈ℕ so that (<ref>) holds trivially. Accordingly, the optimization problem OP in Corollary <ref> can be reduced to OP: min 𝒥(x,u) s.t. ℬ(f(x,u))≤ 0, with u∈ U, x=𝐱_x_0,ν(k). Intuitively, if the system does not start from the secret region X^init_s, i.e., x_0∉ X^init_s, then any state trajectory 𝐱_x_0,ν always fulfills (<ref>). Hence, the conditions for the desired approximate initial-state opacity hold trivially. Next, we consider again the running example for introducing the desired user-defined cost function for this example. example-1 [continued] (Running example) Here, we consider the following cost function for the running example: 𝒥(x,u) := (x-x_set)^⊤M_1 (x-x_set) + 0.1u^2, where M_1:= Diag(30, 1), and x_set is the desired set point. ♢ So far, we have introduced how to construct secure-by-construction controllers (incorporating user-defined cost functions) by leveraging the notions of CBF and ACBF. In the next section, we focus on the computation of CBF and ACBF over systems with polynomial transition and output functions, and semi-algebraic sets X_0, X_s, X_d, and X (i.e., these sets are described by polynomial equalities and inequalities, cf. Assumption <ref>). In this case, one can use SOS programming <cit.> to compute polynomial-type CBF and ACBF leveraging existing semi-definite-programming (SDP) solvers and toolboxes (e.g., <cit.> and <cit.>). § ITERATIVE SUM-OF-SQUARE (SOS) PROGRAMMING FOR SYNTHESIZING CBF AND ACBF §.§ SOS Conditions for Computing CBF and ACBF In this section, we focus on computing CBF and ACBF over systems with polynomial transition and output functions, and semi-algebraic sets X_0, X_s, X_d, and X. Here, we formulate these functions and sets in the next assumption. Consider a dt-CS Σ=(X,X_0,X_s,X_d, U,f,Y,h) as in Definition <ref>. We assume: * f(x,u)∈𝒫(x,u), h(x)∈𝒫(x). * The sets X, X_0, X_s, and X_d are defined as X =⋃_𝗑=1^n_𝗑 X_𝗑, X_0 =⋃_𝖺=1^n_𝖺 X_0,𝖺, X_s =⋃_𝖻=1^n_𝖻 X_s,𝖻, X_d =⋃_𝖼=1^n_𝖼 X_d,𝖼, where n_𝗑,n_𝖺,n_𝖻,n_𝖼∈ℕ are some known integers, and X_𝗑 :={x∈ℝ^n | μ_𝗑,k(x)≥ 0, k ∈ [1,𝗄(𝗑)], 𝗄(𝗑)∈ℕ}, X_0,𝖺 :={x∈ℝ^n | α_𝖺,k(x)≥ 0, k ∈ [1,𝗄_0(𝖺)], 𝗄_0(𝖺)∈ℕ}, X_s,𝖻 :={x∈ℝ^n | β_𝖻,k(x)≥ 0, k ∈ [1,𝗄_s(𝖻)], 𝗄_s(𝖻)∈ℕ}, X_d,𝖼 :={x∈ℝ^n | γ_𝖼,k(x)≥ 0, k ∈ [1,𝗄_d(𝖼)], 𝗄_d(𝖼)∈ℕ}, with μ_𝗑,k(x), α_𝖺,k(x), β_𝖻,k(x), γ_𝖼,k(x) ∈𝒫(x) being some known polynomial functions. * The input set U is defined as U := {u∈ℝ^m | ρ_j(u)≤ 0, j ∈ [1,𝗃] }⊂ℝ^m, with ρ_j(u)∈𝒫(u) being some known polynomial functions.
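As an illustration of this assumption, the semi-algebraic data can be stored directly as lists of polynomial functions, one list per basic set and one collection per union. The snippet below is one possible encoding for a two-dimensional toy system; the particular polynomials are illustrative (the initial set only mimics the interval [2.7,3.3]×{0} of the running example) and are not the sets used in the case studies.

```python
# Each set is a union of "basic" semi-algebraic pieces; each piece is a list of
# polynomial functions that must be nonnegative (for U the convention is <= 0).
X_basic = [
    [lambda x: 10.0 - x[0], lambda x: x[0] + 10.0,     # |x1| <= 10
     lambda x: 6.0 - x[1],  lambda x: x[1] + 6.0],     # |x2| <= 6
]
X0_basic = [[lambda x: (x[0] - 2.7) * (3.3 - x[0]),    # x1 in [2.7, 3.3]
             lambda x: -x[1] ** 2]]                    # x2 = 0
Xd_basic = [[lambda x: x[0] - 7.5],                    # union of two pieces:
            [lambda x: -10.0 - x[0]]]                  # x1 >= 7.5 or x1 <= -10
U_poly = [lambda u: u[0] ** 2 - 16.0]                  # u1 in [-4, 4], rho(u) <= 0

def member(x, basic_sets):
    """x belongs to the union iff all defining polynomials of some piece are >= 0."""
    return any(all(g(x) >= 0 for g in piece) for piece in basic_sets)

in_U = lambda u: all(rho(u) <= 0 for rho in U_poly)    # note the reversed sign for U

print(member((3.0, 0.0), X0_basic), in_U((2.0,)))      # expected: True True
```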
Based on Assumption <ref>, both ℛ^init_d in (<ref>) and ℛ^inf_d in (<ref>) can be rewritten, without loss of generality, as ℛ_d := ⋃_𝖾=1^n_𝖾{(x,x̂)∈ℛ|λ_𝖾,r(x,x̂)≥ 0, r ∈ [1,𝗋(𝖾)],𝗋(𝖾)∈ℕ}, with n_𝖾∈ℕ being a known integer, and λ_𝖾,r(x,x̂) ∈𝒫(x,x̂) being some known polynomial functions. Therefore, for simple presentation, we simply use the notation ℛ_d in the following discussion and do not distinguish between ℛ^init_d and ℛ^inf_d unless necessary. Similarly, we focus on those ℛ_0 satisfying (<ref>) or (<ref>) being of the form of ℛ_0:=⋃_𝖽=1^n_𝖽{(x,x̂)∈ℛ|κ_𝖽,r(x,x̂)≥ 0,r ∈ [1,𝗋(𝖽)],𝗋(𝖽)∈ℕ}, with n_𝖽∈ℕ being a known integer, and κ_𝖽,r(x,x̂) ∈𝒫(x,x̂) being some known polynomial functions. Additionally, we focus on CBF ℬ as in (<ref>) and ACBF ℬ_O as in (<ref>) in the form of polynomial functions. Then, one can find ℬ and ℬ_O leveraging the next result. Consider a dt-CS Σ as in Definition <ref> such that Assumption <ref> holds, and a constant δ∈ℝ_≥ 0 as in Definition <ref>. If there exists ℬ(x), k_a(x)∈𝒫(x), and ℬ_O(x,x̂), k̂_a(x,x̂)∈𝒫(x,x̂), ∀ a ∈[1,m] such that (<ref>)-(<ref>) hold, then functions ℬ(x), ℬ_O(x,x̂) are CBF and ACBF as in (<ref>) and (<ref>), respectively. The proof of Theorem <ref> can be found in the Appendix <ref>. Intuitively, conditions (<ref>)-(<ref>) correspond to (Cond.1) - (Cond.6) in Definition <ref>, respectively, while (<ref>) and (<ref>) ensure the existence of u,û∈ U in (Cond.5) and (Cond.6) in Definition <ref>, respectively, with u=[k_1(x);…;k_m(x)], and û=[k̂_1(x,x̂);…;k̂_m(x,x̂)]. So far, we have proposed SOS conditions under which CBF and ACBF exist for a given dt-CS. Next, we proceed with discussing how to compute CBF and ACBF systematically via an iterative scheme by leveraging these conditions. §.§ Iterative Scheme for Computing CBF and ACBF One may notice that constraints (<ref>)-(<ref>) are bilinear between functions ℬ(x), ℬ_O(x,x̂), k_a(x), k̂_a(x,x̂), with a∈[1,m], and those unknown (SOS) polynomial multipliers. Here, we introduce an iterative scheme to compute these functions over dt-CS that are polynomial control-affine systems <cit.>, described as x(k+1):= Aℋ(x(k))x(k)+B𝒰(x(k))ν(k),k∈ℕ, where A ∈ℝ^n× N_x and B ∈ℝ^n× N_u are some known constant matrices, and 𝒰(x),ℋ(x)∈ℳ^𝗆(x) are some known matrix monomials with appropriate dimensions. Concretely, in the proposed iterative scheme, we will first compute an initial CBF-I set, denoted by 𝒮^init, as well as an initial ACBF-I set, denoted by 𝒮_O^init, such that (Cond.2), (Cond.4), (Cond.5), and (Cond.6) in Definition <ref> are satisfied (these conditions correspond to (<ref>), and (<ref>)-(<ref>), respectively). Then, we will propose an iterative scheme to expand the regions characterized by 𝒮^init and 𝒮_O^init. In each iteration, we will check whether or not (<ref>) (resp. (<ref>)) and (<ref>) (resp. (<ref>)) hold over the expanded version of CBF-I set (referred to as expanded CBF-I set and denoted by 𝒮^exp) and of ACBF-I set (referred to as expanded ACBF-I set and denoted by 𝒮_O^exp). §.§.§ Computation of Initial CBF-I and ACBF-I Sets To compute the initial CBF-I set 𝒮^init and the initial ACBF-I set 𝒮_O^init over the system as in (<ref>), one first selects sets X̅ := {x∈ X| a_ix≤ 1, i ∈ [1,𝗂] }⊆ X\ X_d, ℛ̅ := {(x,x̂)∈ℛ|b_t[x;x̂]≤ 1,t ∈ [1,𝗍]}⊆ℛ\ℛ_d, with a_i∈ℝ^n, and b_t∈ℝ^2n being some known constant vectors, X and X_d being as in (<ref>), ℛ_d being as in (<ref>), and ℛ being the state set of Σ×Σ. Additionally, the following assumption is required for computing 𝒮^init and 𝒮_O^init. 
Consider a polynomial system as in (<ref>), and sets X̅, ℛ̅ as in (<ref>) and (<ref>), respectively. We assume the following conditions hold: [ Q 𝗀(x)^⊤; 𝗀(x) Q ]∈𝒫_S^𝗆(x), a_iQa^⊤_i≤ 1, i∈[1,𝗂]; [ Q_o 𝗀_o(x,x̂)^⊤; 𝗀_o(x,x̂) Q_o ]∈𝒫_S^𝗆(x,x̂), b_tQ_ob^⊤_t≤ 1, t∈[1,𝗍]; for some Q∈ℝ^n × n, Q_o∈ℝ^2n × 2n, K̅(x)∈𝒫^𝗆(x), and K̅_o(x,x̂)∈𝒫^𝗆(x,x̂), with 𝗀(x):=Aℋ(x)Q+B𝒰(x)K̅(x), 𝗀_o(x,x̂):=A_o(x,x̂)Q_o+B_o(x̂)K̅'_o(x,x̂), where ℋ(x)∈𝒫(x) is as in (<ref>), A_o:=[ Aℋ(x)+B𝒰(x)K(x) 0; 0 Aℋ(x̂) ], B_o:=[ 0 0; 0 B𝒰(x̂) ], K̅'_o(x,x̂):=[0;K̅_o(x,x̂)], K(x):=K̅(x)Q^-1, and 0 are zero matrices with appropriate dimensions. Note that Assumption <ref> implies that the dt-CS Σ and its associated augmented system Σ×Σ as in (<ref>) are stabilizable. One can check Assumption <ref> by checking the feasibility of conditions (<ref>)-(<ref>) using semi-definite-programming (SDP) solver (e.g.,  <cit.>). With Assumption <ref>, the next result shows how to compute the sets 𝒮^init and 𝒮_O^init over the system as in (<ref>). Suppose that there exists positive-definite matrices Q∈ℝ^n × n, Q_o∈ℝ^2n × 2n, and matrix polynomials K̅(x)∈𝒫^𝗆(x),K̅_o(x,x̂)∈𝒫^𝗆(x,x̂) such that conditions (<ref>)-(<ref>) in Assumption <ref> hold. Then, there exists c_1,c_2∈(0,1] such that (Cond.2), (Cond.4), (Cond.5), and (Cond.6) in Definition <ref> hold, with 𝒮 := {x∈ℝ^n | x^⊤ Q^-1x-c_1≤ 0}, 𝒮_O :={(x,x̂)∈ℝ^n ×ℝ^n | [x;x̂]^⊤ Q_o^-1[x;x̂]-c_2≤ 0 }. The proof of Theorem <ref> can be found in the Appendix <ref>. Having Theorem <ref>, one obtains the initial CBF-I set 𝒮^init as in (<ref>) and initial ACBF-I set 𝒮_O^init as in (<ref>) by deploying Algorithm <ref>. As a key insight, in Algorithm <ref>, step <ref> aims at computing Q and K̅(x) in (<ref>); step <ref> aims at computing Q_o and K̅_o(x,x̂) in (<ref>); steps <ref> and <ref> are for computing c_1 and c_2 to respect the input constraints as in (<ref>). 0.5em [ht!] System as in (<ref>), sets X̅ and ℛ̅ as in (<ref>) and (<ref>), respectively. Maximal degrees of the matrix polynomials K̅(x) and K̅_o(x,x̂) in Theorem <ref>, denoted by d_k̅^max and d_k̅_o^max, respectively. Initial CBF-I set 𝒮^init= {x∈ℝ^n | x^⊤ Q^-1x -c_1≤ 0 } and the initial ACBF-I set 𝒮_O^init = {(x,x̂)∈ℝ^n ×ℝ^n | [x;x̂]^⊤ Q_o^-1[x;x̂]- c_2≤ 0}; Otherwise, the algorithm stops inconclusively if one cannot find K̅(x), Q, K̅_o(x,x̂) or Q_o for the given d_k̅^max and d_k̅_o^max. Compute K̅(x) and Q in (<ref>) and (<ref>). If K̅(x) and Q are obtained successfully, proceed to step <ref>; otherwise, increase the degree of K̅(x) (up to d_k̅^max) and repeat step <ref>. Set K(x):=K̅(x)Q^-1 with K̅(x) and Q obtained in step <ref>. Compute K̅_o(x,x̂) and Q_o in (<ref>) and (<ref>). Increase the degree of K̅_o(x,x̂) (up to d_k̅_o^max) and repeat step <ref> if K̅_o(x,x̂) and Q_o cannot be found; otherwise, proceed to step <ref>. Compute c_1 with bisection over c_1∈ (0,1] such that (<ref>) holds with u(x) = K̅(x)Q^-1x and ℬ(x) = x^⊤ Q^-1x - c_1, in which K̅(x) and Q are obtained in step <ref>. Compute c_2 with bisection over c_2∈ (0,1], such that (<ref>) holds with û=K̅_o(x,x̂)Q_o^-1[x;x̂], and ℬ_O(x,x̂)= [x;x̂]^⊤ Q^-1_o [x;x̂]- c_2, in which Q_o and K̅_o(x,x̂) are obtained in step <ref>. Computing initial CBF-I set 𝒮^init and initial ACBF-I set 𝒮_O^init for systems as in (<ref>). 0.5em Next, we revisit the running example to show how to compute sets 𝒮^init and 𝒮_O^init leveraging Theorem <ref> and Algorithm <ref>. 
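Before turning to the numbers of the running example, note that for a linear system (ℋ(x)=I_n, 𝒰(x)=I_n) and a degree-0 K̄, the matrix-SOS conditions of Assumption <ref> collapse to ordinary linear matrix inequalities, so the corresponding steps of Algorithm <ref> amount to a standard SDP feasibility problem. A minimal sketch with an off-the-shelf convex-optimization package is given below; the matrices A, B and the half-space vectors a_i are placeholders (they are not the nonlinear car model of the running example), and the log-det objective is only one possible way to pick among feasible Q.

```python
import cvxpy as cp
import numpy as np

# LMI sketch of the feasibility check behind Assumption 2 for a *linear* system
# with degree-0 Kbar.  A, B and the a_i below are illustrative placeholders.
n, m = 2, 1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
a = [np.array([0.1347, 0.0]), np.array([-0.1554, 0.0]),
     np.array([0.0, 0.2020]), np.array([0.0, -0.2020])]   # Xbar = {x : a_i x <= 1}

Q = cp.Variable((n, n), PSD=True)
Kbar = cp.Variable((m, n))

G = A @ Q + B @ Kbar                                       # g(x) in the linear case
lmi = cp.bmat([[Q, G.T], [G, Q]]) >> 0                     # Schur-complement LMI
box = [ai @ Q @ ai <= 1 for ai in a]                       # a_i Q a_i^T <= 1

prob = cp.Problem(cp.Maximize(cp.log_det(Q)), [lmi] + box)
prob.solve()
print(prob.status, np.round(Q.value, 3))
# S^init = {x : x^T Q^{-1} x <= c1}, with c1 in (0,1] then found by bisection
# so that the input constraints are respected, as in the last steps of Algorithm 2.
```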
example-1 [continued] (Running example) To compute the sets 𝒮^init and 𝒮_O^init for the running example, we select * the set X̅ as in (<ref>), with a_1:=[-0.1554 0], a_2:=[0.1347;0]^⊤, a_3:=[0;-0.2020]^⊤, and a_4:=-a_3; * the set ℛ̅ as in (<ref>), with b_1 := [1.086;0;-1.086;0]^⊤, b_2 := -b_1, and b_3 := [0;0;0.1813;0]^⊤; * candidates of K̅(x) and K̅_o(x,x̂) as in Theorem <ref>, with deg(K̅(x))=0 and deg(K̅_o(x,x̂))=0. Then, we deploy Theorem <ref> and Algorithm <ref> to compute 𝒮^init and 𝒮_O^init. Accordingly, we obtain Q = [ 40.155 -8.950; -8.950 19.845 ], K̅ = [ -0.014 -0.033 ], Q_o = [ 35.612 -13.132 32.277 -16.008; -13.132 25.424 -11.908 26.485; 32.277 -11.908 29.520 -14.952; -16.008 26.485 -14.952 29.445 ], and K̅_o = [ 48.675 16.387 -54.999 -16.276 ], with c_1=1, and c_2= 0.57. ♢ §.§.§ Iterative Scheme for Computing the Expanded CBF-I and ACBF-I Sets 0.5em [ht!] Sets 𝒮^init and 𝒮_O^init, matrices Q and Q_o, and matrix polynomials K̅(x)∈𝒫^𝗆(x),K̅_o(x,x̂)∈𝒫^𝗆(x,x̂) obtained by leveraging Theorem <ref> and Algorithm <ref>. Degrees of 𝒵(x) in (<ref>), 𝒵_o(x,x̂) in (<ref>), and those (SOS) polynomial multipliers appearing in (<ref>)-(<ref>); maximal number of iterations i_max; constant λ∈ℝ_≥ 0 selected by users. Expanded CBF-I set 𝒮^exp and ACBF-I set 𝒮^exp_O. i=1. Set [k_1(x);…;k_m(x)]:= K̅(x)Q^-1x and [k̂_1(x,x̂); …;k̂_m(x,x̂)]:=K̅_o(x,x̂)Q_o^-1[x;x̂]. i≤ i_max Fix ℬ(x), ℬ_O(x,x̂), k_a(x), and k̂_a(x,x̂), with a∈[1,m], compute the (SOS) polynomial multipliers in (<ref>), and (<ref>)-(<ref>). Fix the (SOS) polynomial multipliers in (<ref>), and (<ref>)-(<ref>) obtained in step <ref>, solve OP_1 to compute new ℬ(x), ℬ_O(x,x̂), k_a(x), and k̂_a(x,x̂), with a∈[1,m]: OP_1:min trace(P)+λtrace(P_O) (<ref>), and (<ref>)-(<ref>) hold. Check whether (<ref>) (resp. (<ref>)) is feasible. Check whether (<ref>) (resp. (<ref>)) is feasible. steps <ref> and <ref> are feasible Stop successfully. i > i_max Stop inconclusively. i=i+1. Iterative scheme for computing CBF and ACBF. 0.5em In this subsection, we proceed with discussing how to compute the expanded CBF-I set 𝒮^exp and ABCF-I set 𝒮_O^exp using Theorem <ref> based on the sets 𝒮^init and 𝒮_O^init obtained by leveraging Theorem <ref>. Since we focus on polynomial type CBF and ACBF, without loss of generality <cit.>, one can write ℬ and ℬ_O in Definition <ref> as ℬ(x) := 𝒵^⊤(x)P𝒵(x), ℬ_O(x,x̂) := 𝒵_o^⊤(x,x̂)P_o𝒵_o(x,x̂), respectively, in which 𝒵(x)∈𝒫^𝗆(x) and 𝒵_o(x,x̂)∈𝒫^𝗆(x,x̂), with P and P_o being real matrices with appropriate dimensions. Then, one can deploy Algorithm <ref> to compute the sets 𝒮^exp and 𝒮_O^exp according to Theorem <ref>. If Algorithm <ref> does not stop successfully, one may consider selecting larger iteration number i_max, or increasing the degrees of 𝒵(x) in (<ref>), 𝒵_o(x,x̂) in (<ref>), and those (SOS) polynomial multipliers appearing in (<ref>)-(<ref>). By deploying Algorithm <ref>, one can enlarge the CBF-I and ACBF-I sets characterized by 𝒮^init and 𝒮_O^init, respectively, since * by deploying the objective function of OP_1, the volumes of the sets 𝒮 and 𝒮_O tend to increase <cit.> as the number of iteration increases. * CBF ℬ(x) and ACBF ℬ_O(x,x̂) in (<ref>) and (<ref>) can be polynomial functions of order higher than 2, while Theorem <ref> only provides polynomial CBF ℬ(x) and ACBF ℬ_O(x,x̂) of order 2. § CASE STUDY To show the effectiveness of our results, we first proceed with discussing the running example with the results in Section <ref> and simulate the system using the controller proposed in Section <ref>. 
Then, we apply our results to a case study on controlling a satellite. §.§ Running Example (continued) Based on the initial CBF-I set 𝒮^init and the initial ACBF-I set 𝒮_O^init obtained by leveraging Theorem <ref>, we deploy the iterative scheme proposed in Section <ref> to compute the expanded CBF-I set 𝒮^exp and ACBF-I set 𝒮_O^exp. Here, we consider candidates of ℬ(x), ℬ_O(x,x̂), k(x), and k̂(x,x̂) with deg(ℬ(x))=4, deg(ℬ_O(x,x̂))=4, deg(k(x))=6, and deg(k̂(x,x̂))=4. The computation ends in 7 iterations, and the evolution of the sets 𝒮^exp and 𝒮_O^exp with respect to the number of iterations is depicted in Figure <ref> and Figure <ref>, respectively. The explicit form of the obtained CBF ℬ(x) and ACBF ℬ_O(x,x̂) associated with the final 𝒮^exp and 𝒮_O^exp, as well as the functions k_a(x) and k̂_a(x,x̂) as in Theorem <ref>, with a=1, are provided in Appendix <ref>. To validate the obtained CBF and ACBF, we deploy the controller constructed based on the CBF and ACBF as described in Section <ref>, considering the cost function as in (<ref>). We initialize the system at x(0)=[3;0], and select x̂(0)=[2.77;0]. Then, we simulate the system for 900 time steps (90 sec.). During the simulation, we change the set points as in (<ref>) from time to time. The evolution of the set points and simulation results of the car are shown in Figure <ref>. Between time steps k=100 and k=305, the car enters the secret state set X^inf_s. However, the intruder cannot be certain whether the car has actually entered the region X^inf_s due to the existence of 𝐱_x̂(0),ν̂ that has not entered X^inf_s. Between time steps k=600 and k=825, a set point is given that would drive the car away from the safety region. Thanks to the synthesized CBF, the car does not leave the safety region, so that the desired safety property is satisfied. Meanwhile, as shown in Figure <ref>, the desired velocity and input constraints are also respected. §.§ Chaser Satellite In this case study, we focus on controlling a chaser satellite, as shown in Figure <ref>. The chaser satellite moves around a target, while the position of the satellite is observed by a malicious intruder. The intruder should not be able to infer whether or not * the satellite started from the region X^init_s; * the satellite has ever entered the region X^inf_s. Additionally, the desired safety region is characterized by 1) a neighborhood region around the target in which the chaser satellite must stay; 2) a velocity restriction within this region. The motion of the chaser satellite is modeled, following <cit.>, as x(k+1) = Ax(k) + Bν(k), y(k) = Cx(k) with A := [ 1 0 1.1460 0.7038; 0.8156 1 -0.0064 4.3382; 0 0 0.8660 1.1460; 0 0 -0.2182 0.8660 ], B := [ 0 0.3031; 0.006 0.1286; 0 0.0014; 0 0.0023 ] , C := [ 1 0 0 0; 0 0 1 0 ], in which x = [x_1;x_2;x_3;x_4] is the state of the system, with x_1 and x_2 (resp. x_3 and x_4) being the relative position (m) and velocity (m/s) between the satellite and the target on the 𝗑-axis (resp. 𝗒-axis); u=[u_1;u_2] denotes the control input of the system, in which u_1,u_2∈ [-10,10] are the thrust forces of the satellite on the 𝗑-axis and 𝗒-axis, respectively, and y is the output of the system observed by the malicious intruder, with observation precision δ =2.
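For reference, the satellite model above can be rolled out directly; the short sketch below only illustrates the state-update and output conventions (a zero-thrust rollout from an arbitrary illustrative initial condition) and is not the secure-by-construction controller itself.

```python
import numpy as np

# Discrete-time chaser-satellite model with the matrices given above.
A = np.array([[1.0,    0.0,  1.1460, 0.7038],
              [0.8156, 1.0, -0.0064, 4.3382],
              [0.0,    0.0,  0.8660, 1.1460],
              [0.0,    0.0, -0.2182, 0.8660]])
B = np.array([[0.0,   0.3031],
              [0.006, 0.1286],
              [0.0,   0.0014],
              [0.0,   0.0023]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

def step(x, u):
    return A @ x + B @ u

x = np.array([3.8, 0.0, 4.0, 0.0])      # illustrative initial condition
for k in range(5):
    y = C @ x                           # output seen by the intruder
    x = step(x, np.zeros(2))            # zero thrust, for illustration only
    print(k, np.round(y, 3))
```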
Here, we are interested in state set X := [-45, 45] × [-10,10]× [-20,20]× [-10,10], initial set X_0 := [3.6, 4.3] ×{0}× [3.7, 4.5] ×{0}, secret sets X^init_s:= [4, 4.3] ×{0}× [3.7, 4.5] ×{0} and X^inf_s:= [29, 35] × [-5, 5]× [10,12]× [-5, 5], and unsafe set X_d:=X∖( [-35, 35] × [-5,5]× [-12,12]× [-5,5] ). Considering ϵ = 0.01, we construct the sets ℛ^init_0 as in (<ref>), ℛ^init_d as in (<ref>), ℛ^inf_0 as in (<ref>), and ℛ^inf_d as in (<ref>). Accordingly, as for the desired δ-approximate initial-state opacity, we select ℛ_0 satisfying (<ref>), denoted by ℛ'_0, as: {x ∈ X^init_s, x̂∈ [-3.99, 0)×{0}× [3.99, 4.01)×{0}| ||h(x) - h(x̂)|| ≤δ - ϵ}. As for the δ-approximate infinite-step opacity, we choose ℛ_0 satisfying (<ref>), denoted by ℛ”_0, as: {(x,x̂)∈ (X_0∖ X^init_s) × (X_0∖ X^init_s) | ||h(x) - h(x̂)|| ≤δ-ϵ}. To compute the sets 𝒮^init and 𝒮_O^init, we choose * the set X̅ as in (<ref>), with a_1:=[0.0289;0;0;0]^⊤, a_2:=[0;0.2020;0;0]^⊤, a_3:=[0;0;0.0842;0]^⊤, a_4:=[0;0;0;0.2020]^⊤, a_5:=-a_1, a_6:=-a_2, a_7:=-a_3, and a_8:=-a_4; * the set ℛ̅ as in (<ref>), with b_1 := [0.7107; 0;0;0; -0.7107; 0; 0; 0]^⊤, b_2 := [0; 0; 0.7107; 0;0;0;-0.7107; 0]^⊤, b_3 := [0; 0; 0; 0;0;0;0.1;0]^⊤, b_4 := -b_1, and b_5 := -b_2; * candidates of K̅(x) and K̅_o(x,x̂) as in Theorem <ref>, with deg(K̅(x))=0 and deg(K̅_o(x,x̂))=0. Accordingly, we compute 𝒮^init and 𝒮_O^init by leveraging Theorem <ref> and Algorithm <ref>, and we obtain K̅ = [ -1.259 -0.844 -0.363 -7.442; -5.242 -3.2554 -1.417 -31.044 ], Q = [ 874.279 -29.878 -8.132 -144.055; -29.878 23.631 -42.309 4.542; -8.132 -42.309 127.878 -0.025; -144.055 4.542 -0.025 24.213 ], with c_1=c_2= 1. Matrices Q_o and K̅_o are provided in Appendix <ref>. Having the initial CBF-I set 𝒮^init and the initial ACBF-I set 𝒮_O^init obtained by leveraging Theorem <ref>, we then deploy the iterative scheme proposed in Section <ref> to compute the expanded CBF-I set 𝒮^exp and the expanded ACBF-I set 𝒮_O^exp. Here, we consider candidates of ℬ(x), ℬ_O(x,x̂), k_a(x), and k̂_a(x,x̂), in which a∈{1,2}, with deg(ℬ(x))=4, deg(ℬ_O(x,x̂))=2, deg(k_a(x))=2, and deg(k̂_a(x,x̂))=2. The computation ends in 5 iterations. Here, we depict the evolution of the sets 𝒮^exp with respect to the number of iterations in Figure <ref>. Accordingly, the CBF ℬ(x) and the ACBF ℬ_O(x,x̂) associated with the final 𝒮^exp and 𝒮_O^exp, as well as the functions k_a(x) and k̂_a(x,x̂), as in Theorem <ref>, with a∈{1,2}, are provided in Appendix <ref>. To validate the obtained CBF and ACBF, we randomly select 100 initial states x(0) from the initial state set X_0 and simulated the system for 50 time steps. For those x(0)∈ X_0∩ X^init_s, we randomly select x̂(0) such that (x(0),x̂(0))∈ℛ_0, with ℛ_0 as in (<ref>); for those x(0)∈ X_0\ X_s, we set x̂(0)=x(0) considering the settings in (<ref>) and (<ref>). Moreover, the secure-by-construction controller k(x) associated with the CBF ℬ(x) and the ACBF ℬ_o(x,x̂) is deployed to control the chaser satellite in the simulation. The state trajectories of the satellite and the corresponding input sequences ν(k), are depicted in Figure <ref>, indicating that the desired safety properties and input constraints are respected. Additionally, the desired 2-approximate initial-state and infinite-step opacity are satisfied since for each collected x(0) and its corresponding trajectory 𝐱_x(0),ν, there exists x̂(0)∈ X_0\ X_s and trajectory 𝐱_x̂(0),ν̂, in which ν̂(k)∈ U, such that ‖ h(𝐱_x(0),ν(k))-h(𝐱_x̂(0),ν̂(k))‖≤ 2 holds for all k∈[0,50]. 
Additionally, all the trajectory pairs (𝐱_x(0),ν,𝐱_x̂(0),ν̂) never reach ℛ^inf_d. Here, we demonstrate in Figure <ref> an example of such a pair of trajectories. Additionally, we also depict the sequence ν̂ associated with 𝐱_x̂(0),ν̂ in Figure <ref>. § CONCLUSION In this paper, we proposed a notion of (augmented) control barrier functions to construct secure-by-construction controllers enforcing safety and security properties over control systems simultaneously. Concretely, we considered δ-approximate initial-state opacity and δ-approximate infinite-step opacity as the desired security properties, and invariance properties as the desired safety properties. Accordingly, we proposed conditions under which the (augmented) control barrier functions and their associated secure-by-construction controllers can be synthesized. Given valid (augmented) control barrier functions, we introduced how to incorporate user-defined cost functions when designing the secure-by-construction controllers. Additionally, we proposed an iterative sum-of-square programming to systematically compute (augmented) control barrier functions over polynomial control-affine systems. The effectiveness of our methods was shown by two case studies. As a future direction, we will investigate how to handle the secure-by-construction controller synthesis problem for large-scale CPS efficiently. Moreover, we are also interested to extend the current results by considering more general safety properties (e.g., those expressed as linear temporal logic formulae <cit.>, instead of simple invariance properties), and by exploring other notions of security properties <cit.> for complex CPS. IEEEtran § PROOF OF STATEMENTS §.§ Proof of the Results in Section <ref> Proof of Theorem <ref>. We first show that C(x) enforces the desired safety property as in Definition <ref>. Consider the functions ℬ and C such that conditions (Cond.1) and (Cond.5) in Definition <ref> hold, then, starting from any initial state x_0∈ X_0, one has 𝐱_x_0,ν(k)∈𝒮, for all k∈ℕ, with ν being generated by C. On the other hand, (Cond.2) ensures that 𝒮∩ X_d = ∅, indicating that from any initial state x_0∈ X_0, one has 𝐱_x_0,ν(k)∉ X_d, for all k∈ℕ. Hence, C(x) enforces the desired safety properties in Problem <ref>. Next, we proceed with showing that C(x) also enforces the desired approximate initial-state opacity property as in Definition <ref>. Consider any arbitrary secret initial state x_0 ∈ X_0 ∩ X^init_s. By the assumption as in (<ref>) that {x ∈ X_0 | ‖ h(x) - h(x_0)‖≤δ}⊈ X^init_s for all x_0 ∈ X_0 ∩ X^init_s, we get that there exists x̂_0 ∈ X_0 ∖ X^init_s such that ‖ h(x̂_0) - h(x_0)‖≤δ. This indicates that the set ℛ^init_0 in (<ref>) is not empty. Notice that since set ℛ_0 ≠∅ satisfies Proj(ℛ^init_0) ⊆Proj(ℛ_0) as in (<ref>), we get that for any x_0∈ X_0 ∩ X^init_s, there exists x̂_0 ∈ℝ^n such that (x_0, x̂_0) ∈ℛ_0. Moreover, given that Proj(ℛ_0) ⊆Proj(ℛ^init_0) as in (<ref>), we further have x̂_0 ∈ X_0 ∖ X^init_s holds. Therefore, by (Cond.3), we get that ∀ x_0∈ X_0 ∩ X^init_s, ∃x̂_0 ∈ X_0 ∖ X^init_s, s.t. ‖ h(x_0) - h(x̂_0)‖≤δ, with (x_0,x̂_0)∈𝒮_O. Moreover, since we further have (Cond.6) holds, for (x_0,x̂_0)∈𝒮_O, by deploying u_0∈ C(x_0), there exists û_0∈ U such that (x_1,x̂_1)∈𝒮_O, with x_1 = f(x_0,u_0) and x̂_1=f(x̂_0,û_0). By induction, (Cond.6) implies that if one has (x_0,x̂_0)∈𝒮_O, there exists input runs ν:=(u_0,…,u_k,…) and ν̂:=(û_0,…,û_k,…), such that one gets (𝐱_x_0,ν(k),𝐱_x̂_0,ν̂(k))∈𝒮_O, for all k∈ℕ, in which ν is generated by controller C. 
On the other hand, (Cond.4) indicates that for all (x,x̂)∈𝒮_O, one gets (x,x̂)∉ℛ_d = ℛ^init_d, which implies that ‖ h(x)-h(x̂) ‖≤δ by the structure of ℛ^init_d as in (<ref>). Therefore, one can conclude that for any x_0 ∈ X_0 ∩ X^init_s, for all 𝐱_x_0,ν with ν being generated by C(x), there exists 𝐱_x̂_0,ν̂ with x̂_0 ∈ X_0 ∖ X^init_s such that one has ‖ h(𝐱_x_0,ν(k))-h(𝐱_x̂_0,ν̂(k))‖≤δ. This indicates that C(x) enforces the approximate initial-state opacity property as in Problem <ref> as well, which completes the proof. ▪ Proof of Corollary <ref>. Note that (<ref>) holds indicates that (<ref>) holds. The rest of the proof can be formulated similarly to that for Theorem <ref>. ▪ Proof of Theorem <ref>. First note that by following the same proof of Theorem <ref>, a controller C(x) satisfying (Cond.1), (Cond.2) and (Cond.5) as in Definition <ref> enforces the safety property as in Problem <ref>. Next, we show that with sets ℛ^inf_0 and ℛ^inf_d defined as in (<ref>) and (<ref>), respectively, C(x) also enforces approximate infinite-step opacity of Σ as in Definition <ref>. Consider any arbitrary initial state x_0 ∈ X_0 and an arbitrary input sequence ν generated by the controller C, and the corresponding state trajectory 𝐱_x_0,ν=(x_0,…, x_n) such that x_k ∈ X^inf_s for some k ∈ [0,n]. Next, we consider the following two cases: * If k=0, then we have x_0 ∈ X^inf_s. By the assumption as in (<ref>) that {x ∈ X_0 | ‖ h(x) - h(x_0)‖≤δ}⊈ X^inf_s for all x_0 ∈ X_0 ∩ X^inf_s, we get that there exists x̂_0 ∈ X ∖ X^inf_s such that ‖ h(x) - h(x_0)‖≤δ. This implies that ℛ^inf_0 in (<ref>) is not empty. Note that set ℛ_0 ≠∅ satisfies Proj(ℛ^inf_0) ⊆Proj(ℛ_0) as in (<ref>), we get that for any x_0 ∈ X_0 ∩ X^inf_s, there exists x̂_0 ∈ℝ^n such that (x_0, x̂_0) ∈ℛ_0. Moreover, since Proj(ℛ_0) ⊆Proj(ℛ^inf_0) holds as in (<ref>), we further have x̂_0 ∈ X ∖ X^inf_s. Therefore, by (Cond.3), we get that ∀ x_0∈ X_0 ∩ X^inf_s, ∃x̂_0 ∈ X_0 ∖ X^inf_s, s.t. ‖ h(x_0) - h(x̂_0)‖≤δ, with (x_0,x̂_0)∈𝒮_O. Furthermore, since (Cond.6) holds, for (x_0,x̂_0)∈𝒮_O, by deploying u_0∈ C(x_0), there exists û_0∈ U such that (x_1,x̂_1)∈𝒮_O, with x_1 = f(x_0,u_0) and x̂_1=f(x̂_0,û_0). By induction, (Cond.6) implies that if one has (x_0,x̂_0)∈𝒮_O, there exists input runs ν:=(u_0,…,u_k,…) and ν̂:=(û_0,…,û_k,…), such that one gets (𝐱_x_0,ν(k),𝐱_x̂_0,ν̂(k))∈𝒮_O, for all k∈ℕ, in which ν is generated by controller C. Moreover, (Cond.4) indicates that for all (x,x̂)∈𝒮_O, one gets (x,x̂)∉ℛ_d = ℛ^inf_d, which implies that ‖ h(x)-h(x̂) ‖≤δ by the structure of ℛ^inf_d as in (<ref>). Therefore, one can conclude that for any x_0 ∈ X_0 ∩ X^inf_s, for all 𝐱_x_0,ν with ν being generated by C(x), there exists 𝐱_x̂_0,ν̂ with x̂_0 ∈ X_0 ∖ X^inf_s such that one has ‖ h(𝐱_x_0,ν(k))-h(𝐱_x̂_0,ν̂(k))‖≤δ. * If k≥ 1, then we have x_0 ∈ X_0 ∖ X^inf_s. One can readily get that there exists x̂_0 ∈ X_0 ∖ X^inf_s, with ‖ h(x_0) - h(x̂_0)‖≤δ, such that (x_0, x̂_0) ∈ℛ^inf_0, with ℛ^inf_0 defined as in (<ref>). Moreover, (<ref>) implies that (x_0, x̂_0) ∈ℛ_0 so that one has (x_0,x̂_0)∈𝒮_O by leveraging (Cond.3). Then, since we have (Cond.6) holds, for the pair of states (x_0,x̂_0)∈𝒮_O, by deploying u_0∈ C(x_0), there exists û_0∈ U such that (x_1,x̂_1)∈𝒮_O, with x_1 = f(x_0,u_0) and x̂_1=f(x̂_0,û_0). By induction, (Cond.6) implies that if one has (x_0,x̂_0)∈𝒮_O, there exists input runs ν:=(u_0,…,u_k,…) and ν̂:=(û_0,…,û_k,…), such that one gets (𝐱_x_0,ν(k),𝐱_x̂_0,ν̂(k))∈𝒮_O, for all k∈ℕ, in which ν is generated by controller C. 
Moreover, (Cond.4) indicates that for all (x,x̂)∈𝒮_O, one gets (x,x̂)∉ℛ_d = ℛ^inf_d, which implies that ‖ h(x)-h(x̂) ‖≤δ by the structure of ℛ^inf_d as in (<ref>). Note that for the secret state x_k = 𝐱_x_0,ν(k) ∈ X^inf_s, by the structure of ℛ^inf_d in (<ref>), it follows that 𝐱_x̂_0,ν̂(k) ∈ X ∖ X^inf_s with ‖ h(𝐱_x_0,ν(k))-h(𝐱_x̂_0,ν̂(k)) ‖≤δ. Therefore, one can conclude that for any x_0 ∈ X_0, for all state trajectory 𝐱_x_0,ν such that x_k ∈ X^inf_s, with ν being generated by C(x), there exists 𝐱_x̂_0,ν̂ with x̂_0 ∈ X_0 and x̂_k ∈ X ∖ X^inf_s such that one has ‖ h(𝐱_x_0,ν(k))-h(𝐱_x̂_0,ν̂(k))‖≤δ for all k ∈ℕ. This indicates that C(x) enforces the approximate infinite-step opacity property as in Problem <ref> as well, which completes the proof. ▪ Proof of Corollary <ref>: Here, we show Corollary <ref> by showing that for all x_0∈ X_0, there exist x̂_0∈ X_0 and 𝐱_x̂_0,ν, such that 𝐱_x_0,ν(k)∉ X_d,∀ k∈ℕ; ‖ h(𝐱_x_0,ν(k))-h(𝐱_x̂_0,ν̂(k))‖≤δ,∀ k∈ℕ; x_0 ∈ X^init_s x̂_0 ∉ X^init_s; 𝐱_x_0,ν(k) ∈ X^inf_s 𝐱_x̂_0,ν(k) ∉ X^inf_s ,∀ k∈ℕ; in which ν(k):=u and ν̂(k):=û are computed by solving the following optimization problem OP at each time step k∈ℕ. As a key insight, (<ref>) corresponds to the desired safety property as in Problem <ref>,  (<ref>) and (<ref>) correspond to the desired opacity properties associated with X^init_s, while (<ref>) together with (<ref>) indicate that the desired opacity properties associated with X^inf_s are satisfied. Firstly, ∀ x_0∈ X_0, one has x_0 ∈𝒮 by (Cond.1). Moreover, by applying input ν(k) satisfying ℬ(f(𝐱_x_0,ν(k),ν(k))) ≤ 0 for all k∈ℕ (existence of such ν(k) is guaranteed by (Cond.5)), one has 𝐱_x_0,ν(k) ∈𝒮 for all k∈ℕ considering and (Cond.5). Meanwhile, since one has 𝒳∩ X_d = ∅ according to (Cond.2), one has (<ref>) holds. Next, we proceed with discussing (<ref>)-(<ref>): * If x(0) = x_0∈ X_0∩ X^inf_s, one can select x̂(0)∈ X_0\ X^inf_s, such that ‖ h(x(0))-h(x̂(0))‖≤δ and (x(0),x̂(0))∈𝒮_O considering (Cond.3) and (<ref>). On one hand, by applying input ν(k) (and ν̂(k)) satisfying ℬ(f(𝐱_x_0,ν(k),ν(k)))≤ 0 and ℬ_O(f(𝐱_x_0,ν(k),ν(k)),f(𝐱_x̂_0,ν(k)(k),ν̂(k)))≤ 0 (existence of such ν(k) and ν̂(k) is guaranteed by (Cond.5) and (Cond.6)), one has (𝐱_x_0,ν(k),𝐱_x̂_0,ν(k)(k))∈𝒮_O holds for all k∈ℕ according to (Cond.5) and (Cond.6). One the other hand, one has 𝒮_O ∩ℛ_d=∅ due to (Cond.4). Therefore, considering (<ref>), one has (<ref>) and (<ref>) hold for all k∈ℕ. * If x(0) =x_0∈ X_0\ X^inf_s, there exists x̂(0) ∈ X_0, with ‖ h(x(0))-h(x̂(0))‖≤δ and (x(0),x̂(0))∈𝒮_O considering (Cond.3) and (<ref>). Then, one can show (<ref>) and (<ref>) also hold for all k∈ℕ in a similar way as in the case in which x(0) ∈ X_0∩ X^inf_s. * If x(0) = x_0∈ X_0∩ X^init_s, one can select x̂(0)∈ X_0\ X^init_s, such that ‖ h(x(0))-h(x̂(0))‖≤δ and (x(0),x̂(0))∈𝒮_O considering (Cond.3) and (<ref>). Then, one can show (<ref>) and (<ref>) also hold for all k∈ℕ in a similar way as in the case in which x(0) ∈ X_0∩ X^inf_s. The proof can then be completed by summarizing the discussion above. ▪ §.§ Proof of the Results in Section <ref> Proof of Theorem <ref>. In general, we prove Theorem <ref> by showing: 1) (<ref>)-(<ref>) imply (Cond.1)-(Cond.4) in Definition <ref>, respectively; 2) (<ref>) and (<ref>) imply (Cond.5) in Definition <ref> holds, while (<ref>) and (<ref>) ensure that (Cond.6) in Definition <ref> is satisfied. Firstly, one can prove (<ref>) ⇒ (Cond.1), and (<ref>) ⇒ (Cond.3) can be shown with the same ideas. 
If (<ref>) holds, then ∀𝖺∈[1, n_𝖺], one has -ℬ(x)≥∑_k=1^k_0(𝖺) s_d,k(𝖺)α_𝖺,k(x)≥ 0, ∀ x ∈ X_0,𝖺. The second inequality hold since ∀ k∈[1,k_0(𝖺)], s_d,k(𝖺)≥ 0 hold ∀ x ∈ℝ^n. In other words, one has x∈ X_0⇒ x∈𝒮, indicating that (Cond.1) holds. Secondly, we show (<ref>) ⇒ (Cond.2), while (<ref>) ⇒ (Cond.4), can also be proved similarly. One can verify (Cond.2) holds if ∀𝖼∈[1,n_𝖼], {γ_𝖼,k(x)≥ 0,∀ k∈[1,k_d(𝖼)], ℬ(x)≤ 0}=∅. According to Positivstellensatz <cit.>, one has (<ref>) holds if and only if there exists s_i_0,…,i_k_d(𝖼)∈𝒫_S(x) such that ∑_i_0,…,i_k_d(𝖼)∈{0,1}s_i_0,…,i_k_d(𝖼)(-ℬ(x))^i_0γ_𝖼,1^i_1(x)⋯γ_𝖼,k_d(𝖼)^i_k_d(𝖼)(x)∈𝒫_S(x). By selecting s_i_0,…,i_k_d(𝖼)=0 if ∑_r=1^k_d(𝖼)i_r>1, one gets the relaxation as s - s_0 ℬ(x) + ∑_k=1^k_d(𝖼)s_kγ_𝖼,k(x)∈𝒫_S(x), with s,s_k∈𝒫_S(x), ∀ k∈[0,k_d(𝖼)]. Note that (<ref>) holds for all 𝖼∈[1,n_𝖼] if and only if (<ref>) holds. Finally, we show that (<ref>) and (<ref>) imply (Cond.5) in Definition <ref> holds, while (<ref>) and (<ref>) ensure that (Cond.6) in Definition <ref> is satisfied. On one hand, one can verify that (Cond.5) and (Cond.6) in Definition <ref> hold if and only if (<ref>)-(<ref>) hold. On the other hand, if (<ref>)-(<ref>) hold, then (<ref>)-(<ref>) hold. Here, we show that (<ref>)-(<ref>) implies (<ref>)-(<ref>), respectively. *  (<ref>) ⇒ (<ref>): With Positivstellensatz <cit.>,  (<ref>) holds if and only if s_b,0-s_b,1ℬ(x)+s_b,2ℬ(f(x,u))-s_b,3ℬ(x)ℬ(f(x,u))+∑_a=1^mp_b(a)(u_a - k_a(x))=0 holds, with s_b,0,s_b,1,s_b,2,s_b,3∈𝒫_S(x,u) and p_b(a)∈𝒫(x,u), ∀ a ∈ [1,m]. By selecting s_b,3=0, one has (<ref>) holds if and only if (<ref>) holds. *  (<ref>) ⇒ (<ref>): Similarly, (<ref>) holds if and only if ∀ j ∈ [1,𝗃], ∃ s_c,i(j) ∈𝒫_S(x,u), with i∈[0,3], ∃ k∈ℕ, and ∃ p_c(a,j)∈𝒫(x,u), with a∈ [1,m] such that s_c,0(j) - s_c,1(j)ℬ(x)+s_c,2(j)ρ_j(u)-s_c,3(j)ρ_j(u)ℬ(x)+∑_a=1^mp_c(a,j)(u_a -k_a(x)) +ρ_j(u)^2k=0, which holds if and only if (<ref>) holds. Then, the proof can be completed by showing (<ref>)⇒(<ref>) and (<ref>)⇒(<ref>) in similar ways. ▪ Proof of Theorem <ref>: Firstly, according to <cit.>, one can verify {x∈ℝ^n|x^⊤ Q^-1x≤ 1}⊆X̅ and {(x,x̂)∈ℝ^n×ℝ^n|[x;x̂]^⊤ Q_o^-1[x;x̂]≤ 1}⊆R̅ hold if and only if (<ref>) and (<ref>) hold, respectively. Therefore, one gets 𝒮⊆ X\ X_d and 𝒮_O⊆ℛ\ℛ_d with 𝒮 :={x∈ℝ^n|x^⊤ Q^-1x≤ c_1}, 𝒮_O := {(x,x̂)∈ℝ^n ×ℝ^n | [x;x̂]^⊤ Q_o^-1[x;x̂]≤ c_2 }, for all c_1,c_2∈(0,1]. Therefore, one has (Cond.2) and (Cond.4) hold with 𝒮 and 𝒮_O as in (<ref>) and (<ref>). Finally, we show that (<ref>) and (<ref>) imply (Cond.5) and (Cond.6), respectively. *  (<ref>) ⇒ (Cond.5): We show (<ref>) implies that there exists c_1∈(0,1] such that (Cond.5) holds with 𝒮 as in (<ref>), when u(x):=K̅(x)Q^-1x is applied for all x∈𝒮. Consider a controller u(x):=K(x)x, with K(x)∈𝒫(x). For all x∈𝒮, one has Aℋ(x)x+B𝒰(x)u∈𝒮 if P - (Aℋ(x)+B𝒰(x)K(x))^⊤P(Aℋ(x)+B𝒰(x)K(x))∈𝒫_S^𝗆(x) holds, with P=Q^-1, or, equivalently, Q - (A ℋ(x)Q+B𝒰(x)K̅(x))^⊤P(Aℋ(x)Q+B𝒰(x)K̅(x))∈𝒫_S^𝗆(x), holds, with K̅(x)=K(x)Q. Then, considering the Schur complement <cit.> of Q of the matrix on the left-hand-side of (<ref>), one has (<ref>) holds if (<ref>) holds. Since 0_n∈𝒮, and u(x)=K(x)x=0 when x=0_n, there must exist c_1∈ℝ_>0 such that u(x)∈ U holds ∀ x ∈𝒮. Hence, one has (<ref>) ⇒ (Cond.5). *  (<ref>) ⇒ (Cond.6): Next, we show (<ref>) implies that there exist c_2∈(0,1] such that (Cond.6) holds with 𝒮_O as in (<ref>), when u(x):=K̅(x)Q^-1x and û(x,x̂):=K̅_o(x,x̂)Q_o^-1[x;x̂] are applied for all [x;x̂]∈𝒮_O. 
According to <cit.>, if (<ref>) holds, one has Q_o -(A_o(x,x̂)Q_o+B_o(x̂)K̅'_o(x,x̂))^⊤Q_o^-1(A_o(x,x̂)Q_o+B_o(x̂)K̅'_o(x,x̂))∈𝒫_S^𝗆(x,x̂), which is equivalent to Q^-1_o - (A_o(x,x̂)+B_o(x̂)K'_o(x,x̂))^⊤Q_o^-1 (A_o(x,x̂)+B_o(x̂)K'_o(x,x̂))∈𝒫_S^𝗆(x,x̂), with K'_o(x,x̂):=K̅'_o(x,x̂)Q^-1_o. On one hand, if (<ref>) holds, then given any c_2∈(0,1], for all [x;x̂]∈ℝ^2n satisfying [x;x̂]^⊤ Q^-1_o[x;x̂]≤ c_2, one has [x^+;x̂^+]^⊤ Q^-1_o[x^+;x̂^+]≤ c_2, in which x^+=Aℋ(x)x+B𝒰(x)K̅(x)Q^-1x and x̂^+=Aℋ(x̂)x̂+B𝒰(x̂)K̅_o(x,x̂)Q_o^-1x̂. On the other hand, similar to the previous case, there exists c_2∈ℝ_>0 such that û(x,x̂)∈ U since 0_2n∈𝒮_O. Therefore, one can conclude that (<ref>) implies that (Cond.6). §.§ CBF and ACBF for the running example and the case study of the chaser satellite In this subsection, we are summarizing those expressions for computing the initial (resp. expanded) CBF-I set 𝒮^init (resp. 𝒮^exp) and the initial (resp. expanded) ACBF-I set 𝒮_O^init (resp. 𝒮_O^exp), which are omitted in the main text. Note that those parameters that are smaller than 10^-4 are neglected for simple presentation. Running example The CBF ℬ(x), ACBF ℬ_O(x,x̂), functions k(x) and k̂(x,x̂) associated with the expanded 𝒮^exp and 𝒮_O^exp for the running example are as follows. ℬ(x) =0.5688-0.6237 x_1+0.2971 x_2-0.6125 x_1^2-1.7204 x_1 x_2-2.4687 x_2^2+0.0162 x_1^3+0.0093 x_1^2 x_2 +0.0296 x_1 x_2^2-0.0145 x_2^3+0.0174 x_1^4+0.0649 x_1^3 x_2+0.1433 x_1^2 x_2^2+0.1533 x_1 x_2^3+0.1353 x_2^4; ℬ_O(x,x̂) =0.5633-0.0006 x_1-0.0012 x_2-0.0018 x̂_1-0.0002 x̂_2+0.0138 x_1^2-0.0201 x_1 x_2-0.0413 x_2^2 -0.1231 x_1 x̂_1-0.0482 x_2 x̂_1+0.0029 x̂_1^2-0.0278 x_1 x̂_2-0.045 x_2 x̂_2-0.0061 x̂_1 x̂_2-0.0065 x̂_2^2 +0.001 x_1^3+0.0012 x_1^2 x_2-0.0014 x_1 x_2^2-0.0011 x_1^2 x̂_1-0.0013 x_1 x_2 x̂_1+0.0012 x_2^2 x̂_1-0.0012 x_1 x̂_1^2 +0.0001 x_2 x̂_1^2+0.0031 x̂_1^3-0.0004 x_1^2 x̂_2+0.0005 x_1 x_2 x̂_2+0.0001 x_2^2 x̂_2-0.0001 x_1 x̂_1 x̂_2 +0.0013 x̂_1^2 x̂_2+0.0007 x_1 x̂_2^2+0.0004 x_2 x̂_2^2-0.0004 x̂_1 x̂_2^2-0.0004 x̂_2^3+0.0349 x_1^4+0.0515 x_1^3 x_2 +0.0511 x_1^2 x_2^2+0.0151 x_1 x_2^3+0.0215 x_2^4+0.001 x_1^3 x̂_1-0.0153 x_1^2 x_2 x̂_1-0.0514 x_1 x_2^2 x̂_1+0.0135 x_2^3 x̂_1 -0.0699 x_1^2 x̂_1^2-0.0707 x_1 x_2 x̂_1^2+0.0434 x_2^2 x̂_1^2-0.0145 x_1 x̂_1^3+0.0338 x_2 x̂_1^3+0.0523 x̂_1^4+0.0112 x_1^3 x̂_2 +0.0491 x_1^2 x_2 x̂_2-0.0037 x_1 x_2^2 x̂_2-0.0133 x_2^3 x̂_2-0.0392 x_1^2 x̂_1 x̂_2-0.1664 x_1 x_2 x̂_1 x̂_2-0.0284 x_2^2 x̂_1 x̂_2 -0.011 x_1 x̂_1^2 x̂_2+0.049 x_2 x̂_1^2 x̂_2+0.0455 x̂_1^3 x̂_2+0.0269 x_1^2 x̂_2^2-0.0066 x_1 x_2 x̂_2^2-0.0164 x_2^2 x̂_2^2+0.011 x̂_2^4 -0.0316 x_1 x̂_1 x̂_2^2-0.0032 x_2 x̂_1 x̂_2^2+0.0404 x̂_1^2 x̂_2^2-0.0043 x_1 x̂_2^3+0.0005 x_2 x̂_2^3+0.0235 x̂_1 x̂_2^3; k(x) =-0.004-0.5205 x_1-0.9476 x_2+0.0075 x_1^2+0.1525 x_1 x_2+0.0872 x_2^2+0.0429 x_1^3+0.1126 x_1^2 x_2 +0.0852 x_1 x_2^2+0.12 x_2^3+0.0003 x_1^4-0.0134 x_1^3 x_2-0.0274 x_1^2 x_2^2-0.0287 x_1 x_2^3-0.0137 x_2^4-0.0012 x_1^5 -0.0042 x_1^4 x_2-0.0068 x_1^3 x_2^2-0.0109 x_1^2 x_2^3-0.0065 x_1 x_2^4-0.0064 x_2^5+0.0002 x_1^5 x_2+0.0008 x_1^4 x_2^2 +0.0015 x_1^3 x_2^3+0.0018 x_1^2 x_2^4+0.0012 x_1 x_2^5+0.0005 x_2^6; k̂(x,x̂) =0.0033+1.9794 x_1+1.3667 x_2-1.9404 x̂_1-1.9815 x̂_2-0.0613 x_1^2+0.1014 x_1 x_2+0.076 x_2^2 -0.119 x_1 x̂_1-0.0608 x_2 x̂_1+0.1039 x̂_1^2-0.0119 x_1 x̂_2-0.1312 x_2 x̂_2-0.0957 x̂_1 x̂_2+0.0283 x̂_2^2 -0.177 x_1^3-0.2866 x_1^2 x_2-0.3162 x_1 x_2^2-0.1003 x_2^3+0.2124 x_1^2 x̂_1+0.4715 x_1 x_2 x̂_1+0.1037 x_2^2 x̂_1 +0.1796 x_1 x̂_1^2-0.1623 x_2 x̂_1^2-0.2304 x̂_1^3+0.1444 x_1^2 x̂_2+0.5553 x_1 x_2 
x̂_2+0.2728 x_2^2 x̂_2-0.1237 x_1 x̂_1 x̂_2 -0.2061 x_2 x̂_1 x̂_2-0.0491 x̂_1^2 x̂_2-0.2777 x_1 x̂_2^2-0.2637 x_2 x̂_2^2+0.1323 x̂_1 x̂_2^2+0.0954 x̂_2^3+0.0235 x_1^4 +0.0253 x_1^3 x_2+0.0753 x_1^2 x_2^2+0.0339 x_1 x_2^3+0.0398 x_2^4-0.0116 x_1^3 x̂_1+0.0219 x_1^2 x_2 x̂_1-0.0542 x_1 x_2^2 x̂_1 -0.0198 x_2^3 x̂_1+0.0192 x_1^2 x̂_1^2-0.1257 x_1 x_2 x̂_1^2-0.0177 x_2^2 x̂_1^2-0.0944 x_1 x̂_1^3+0.0786 x_2 x̂_1^3+0.0656 x̂_1^4 -0.0244 x_1^3 x̂_2-0.1104 x_1^2 x_2 x̂_2-0.0848 x_1 x_2^2 x̂_2-0.1286 x_2^3 x̂_2+0.0306 x_1^2 x̂_1 x̂_2+0.0554 x_1 x_2 x̂_1 x̂_2 +0.0707 x_2^2 x̂_1 x̂_2+0.0178 x_1 x̂_1^2 x̂_2+0.0453 x_2 x̂_1^2 x̂_2-0.0205 x̂_1^3 x̂_2+0.0536 x_1^2 x̂_2^2+0.0381 x_1 x_2 x̂_2^2 +0.157 x_2^2 x̂_2^2-0.028 x_1 x̂_1 x̂_2^2-0.0535 x_2 x̂_1 x̂_2^2-0.0142 x̂_1^2 x̂_2^2+0.0032 x_1 x̂_2^3-0.0897 x_2 x̂_2^3 +0.0149 x̂_1 x̂_2^3+0.022 x̂_2^4. Case study of chaser satellite After computing the initial ACBF-I set 𝒮_O^init for the chaser satellite case study, matrices K̅ and Q_0 are as follows: K̅_o = [ 0.136 -0.003 0.215 -0.274 -0.146 0 -0.214 0.220; 2.704 -0.657 5.223 -8.853 -4.164 -0.001 -5.476 0.163 ], Q_o = [ 654.4078 -58.657 -6.321 -102.935 649.790 -62.536 -6.598 -103.187; -58.657 97.155 -28.654 2.918 -57.566 94.343 -28.670 3.177; -6.321 -28.654 92.077 -0.0244 -6.136 -34.657 92.152 -0.007; -102.935 2.918 -0.0244 17.443 -102.570 3.354 -0.0423 17.457; 649.790 -57.566 -6.136 -102.570 647.025 -121.345 -6.853 -102.960; -62.536 94.343 -34.657 3.354 -121.345 198380.007 -28.577 3.937; -6.598 -28.670 92.152 -0.043 -6.853 -28.5774 94.052 -0.028; -103.187 3.177 -0.007 17.457 -102.960 3.937 -0.028 17.833 ]. The CBF ℬ(x), ACBF ℬ_O(x,x̂), functions k_1(x), k_2(x), k̂_1(x,x̂), and k̂_2(x,x̂) associated with the expanded 𝒮^exp and 𝒮_O^exp for the chaser satellite case study are as follows: ℬ(x) =0.9978+0.0068 x_1+0.006 x_2+0.0101 x_3+0.0505 x_4-0.0566 x_1^2-0.007 x_1 x_2-0.0271 x_2^2 -0.0297 x_1 x_3-0.0285 x_2 x_3-0.152 x_3^2-0.5124 x_1 x_4-0.0041 x_2 x_4-0.2534 x_3 x_4-1.4303 x_4^2 -0.0002 x_1^3-0.0005 x_1^2 x_2-0.0006 x_1 x_2^2-0.0002 x_2^3-0.0003 x_1^2 x_3-0.0003 x_1 x_2 x_3-0.0005 x_2^2 x_3 -0.0001 x_2 x_3^2-0.0001 x_3^3-0.0043 x_1^2 x_4-0.0061 x_1 x_2 x_4-0.0035 x_2^2 x_4-0.0038 x_1 x_3 x_4-0.0021 x_2 x_3 x_4 -0.0005 x_3^2 x_4-0.026 x_1 x_4^2-0.0175 x_2 x_4^2-0.0114 x_3 x_4^2-0.0533 x_4^3+0.0017 x_1^4+0.0025 x_1^3 x_2 +0.0032 x_1^2 x_2^2+0.0004 x_1 x_2^3+0.0012 x_2^4+0.0009 x_1^3 x_3+0.0019 x_1^2 x_2 x_3+0.0017 x_1 x_2^2 x_3+0.001 x_2^3 x_3 +0.0054 x_1^2 x_3^2+0.0021 x_1 x_2 x_3^2+0.0056 x_2^2 x_3^2+0.001 x_1 x_3^3+0.0006 x_2 x_3^3+0.0011 x_3^4+0.0355 x_1^3 x_4 +0.039 x_1^2 x_2 x_4+0.0299 x_1 x_2^2 x_4+0.0011 x_2^3 x_4+0.0181 x_1^2 x_3 x_4+0.0213 x_1 x_2 x_3 x_4+0.0127 x_2^2 x_3 x_4 +0.0618 x_1 x_3^2 x_4+0.012 x_2 x_3^2 x_4+0.0063 x_3^3 x_4+0.2807 x_1^2 x_4^2+0.2065 x_1 x_2 x_4^2+0.0813 x_2^2 x_4^2 +0.1209 x_1 x_3 x_4^2+0.0622 x_2 x_3 x_4^2+0.1801 x_3^2 x_4^2+1.0191 x_1 x_4^3+0.3767 x_2 x_4^3+0.2687 x_3 x_4^3+1.4473 x_4^4; ℬ_O(x,x̂) = -0.0019+0.0002 x_1+0.0004 x_2-0.002 x_3+0.0094 x_4+0.0009 x̂_1+0.0027 x̂_3-0.0036x̂_4 +0.2636 x_1^2-0.0107 x_1 x_2+0.0107 x_2^2+0.0361 x_1 x_3-0.0312 x_2 x_3+0.2944 x_3^2+0.0113 x_1 x_4 +0.1022 x_2 x_4-0.1939 x_3 x_4+0.9 x_4^2-0.5482 x_1 x̂_1+0.0389 x_2 x̂_1-0.0876 x_3 x̂_1+0.1269 x_4 x̂_1 +0.3112 x̂_1^2+0.0002 x_1 x̂_2-0.0005 x_2 x̂_2+0.0009 x_3 x̂_2+0.0045 x_4 x̂_2-0.0005 x̂_1 x̂_2+0.0003 x̂_2^2 -0.0307 x_1 x̂_3+0.0333 x_2 x̂_3-0.59 x_3 x̂_3+0.2022 x_4 x̂_3+0.0843 x̂_1 x̂_3-0.001 x̂_2 x̂_3+0.2991 x̂_3^2 -0.1601 x_1 x̂_4+0.0492 x_2 x̂_4-0.1228 x_3 x̂_4-1.2124 x_4 x̂_4+0.3105 
x̂_1 x̂_4-0.0068 x̂_2 x̂_4 +0.1209 x̂_3 x̂_4+1.1352 x̂_4^2; k_1(x) = 0.0014-0.6836 x_1-0.7086 x_2-0.3108 x_3-2.3368 x_4+0.0003 x_1^2+0.0006 x_1 x_2+0.0004 x_2^2 -0.0001 x_1 x_3+0.0003 x_3^2+0.0032 x_1 x_4+0.0039 x_2 x_4+0.0003 x_3 x_4+0.0118 x_4^2; k_2(x) = 0.0027-1.7315 x_1-1.2185 x_2-0.0702 x_3-10.0234 x_4+0.0003 x_1^2+0.0004 x_1 x_2+0.0007 x_2^2 +0.0009 x_2 x_3+0.0001 x_3^2+0.0025 x_1 x_4+0.0023 x_2 x_4-0.0004 x_3 x_4+0.0066 x_4^2; k̂_1(x,x̂) = 0.0264+0.0026 x_1-0.0049 x_2+0.1314 x_3-0.0478 x_4+0.0064 x̂_1-0.0107 x̂_2-0.1446 x̂_3 +0.0901 x̂_4+0.0234 x_1^2-0.0027 x_1 x_2-0.0019 x_2^2+0.0151 x_1 x_3+0.0058 x_2 x_3-0.0173 x_3^2 -0.0323 x_1 x_4-0.0301 x_2 x_4+0.0331 x_3 x_4-0.1815 x_4^2-0.0554 x_1 x̂_1-0.0016 x_2 x̂_1-0.0102 x_3 x̂_1 -0.0054 x_4 x̂_1+0.0289 x̂_1^2-0.0002 x_1 x̂_2-0.0002 x_2 x̂_2+0.0004 x_3 x̂_2-0.0006 x_4 x̂_2+0.0002 x̂_1 x̂_2 -0.0156 x_1 x̂_3-0.005 x_2 x̂_3+0.0328 x_3 x̂_3-0.0306 x_4 x̂_3+0.0122 x̂_1 x̂_3-0.0005 x̂_2 x̂_3-0.016 x̂_3^2 -0.0012 x_1 x̂_4+0.0073 x_2 x̂_4+0.0056 x_3 x̂_4+0.1384 x_4 x̂_4+0.0084 x̂_1 x̂_4+0.0006 x̂_2 x̂_4 -0.0005 x̂_3 x̂_4-0.0611 x̂_4^2; k̂_2(x,x̂) = -0.0036+0.1768 x_1-0.7756 x_2+3.6439 x_3-4.5077 x_4-1.5763 x̂_1+0.0078 x̂_2-3.63 x̂_3 -3.521 x̂_4-0.0001 x_1^2-0.001 x_1 x_2+0.0006 x_2^2-0.0149 x_1 x_3+0.0014 x_2 x_3-0.0103 x_3^2 -0.0041 x_1 x_4+0.008 x_2 x_4-0.0005 x_3 x_4+0.0329 x_4^2+0.0051 x_1 x̂_1+0.0027 x_2 x̂_1+0.0315 x_3 x̂_1 +0.03 x_4 x̂_1-0.0037 x̂_1^2+0.001 x_1 x̂_2+0.0001 x_2 x̂_2+0.0008 x_3 x̂_2+0.0012 x_4 x̂_2-0.0009 x̂_1 x̂_2 +0.0001 x̂_2^2+0.0179 x_1 x̂_3+0.0005 x_2 x̂_3+0.0265 x_3 x̂_3+0.0125 x_4 x̂_3-0.0303 x̂_1 x̂_3-0.0007 x̂_2 x̂_3 -0.0152 x̂_3^2-0.0017 x_1 x̂_4-0.0058 x_2 x̂_4+0.0382 x_3 x̂_4-0.0308 x_4 x̂_4-0.018 x̂_1 x̂_4-0.0006 x̂_2 x̂_4 -0.0413 x̂_3 x̂_4-0.0077 x̂_4^2.
http://arxiv.org/abs/2307.02994v1
20230706135552
Molecular Simulation for Atmospheric Reaction Exploration and Discovery: Non-Equilibrium Dynamics, Roaming and Glycolaldehyde Formation Following Photo-Induced Decomposition of syn-Acetaldehyde Oxide
[ "Meenu Upadhyay", "Kai Töpfer", "Markus Meuwly" ]
physics.chem-ph
[ "physics.chem-ph" ]
The decomposition and chemical dynamics of vibrationally excited syn-CH_3CHOO are followed based on statistically significant numbers of molecular dynamics simulations. Using a neural network-based reactive potential energy surface, transfer learned to the CASPT2 level of theory, the final total kinetic energy release and rotational state distributions of the OH fragment are in quantitative agreement with experiment. In particular, the widths of these distributions are sensitive to the experimentally unknown O–O bond strength, for which values D_e ∈ [22,25] kcal/mol are found. Due to the non-equilibrium nature of the process considered, the energy-dependent rates do not depend appreciably on the O–O scission energy. Roaming dynamics of the OH-photoproduct leads to formation of glycolaldehyde on the picosecond time scale with subsequent decomposition into CH_2OH+HCO. Atomistic simulations with global reactive machine-learned energy functions provide a viable route to quantitatively explore the chemistry and reaction dynamics for atmospheric reactions. August 1, 2023 § INTRODUCTION Chemical processing and the evolution of molecular materials in the atmosphere are primarily driven by photodissociation reactions. Sunlight photo-excites the molecules in the different layers of Earth's atmosphere and triggers chemical decomposition reactions. The photoproducts are then reactants for downstream reactions from which entire reaction networks emerge.<cit.> Within the extensive array of chemical reactions occurring in the biosphere, Criegee intermediates (CIs) are among the prominent reactive species that have captured particular attention.<cit.> Criegee intermediates are an important class of molecules generated in the atmosphere from ozonolysis of alkenes, which proceeds through a 1,3-cycloaddition of ozone across the C=C bond to form a primary ozonide which then decomposes into an energized carbonyl oxide, also known as CI, and one energized carbonyl (aldehyde or ketone).<cit.> The highly energized CIs rapidly undergo either unimolecular decay to hydroxyl radicals<cit.> or collisional stabilization<cit.>. Stabilized CIs can isomerize and decompose into products including the OH radical, or engage in bimolecular reactions with water vapor, SO_2, NO_2 and acids<cit.>. In the laboratory, generation of CIs in the gas phase has been possible from iodinated precursors<cit.>, which allowed detailed experimental studies of the photodissociation dynamics of syn-acetaldehyde oxide using laser spectroscopy<cit.> and provided important information on its reactivity and decomposition dynamics, including final state distributions of the OH product. Computationally, a range of methods has been used, including RRKM theory<cit.> and MD simulations from the saddle point separating syn-acetaldehyde oxide and vinyl hydroperoxide (VHP).<cit.> However, it was not until recently that the entire reaction pathway from energized syn-CH_3CHOO to OH elimination was followed using neural network-based reactive potential energy surfaces and molecular dynamics simulations.<cit.> The ardent interest in understanding the photodissociation dynamics of energized CIs, and in particular syn-CH_3CHOO, stems from the fact that one of the decomposition products is the OH radical.
The hydroxyl radical, also referred to as the “detergent of the troposphere”,<cit.> is one of the most powerful oxidizing agents and plays an important role in the chemical evolution of the atmosphere, triggering the degradation of many pollutants including volatile organic compounds (VOCs).<cit.> Field studies have suggested that ozonolysis of alkenes is responsible for the production of about one third of the atmospheric OH during daytime, and is the predominant source of hydroxyl radicals at night.<cit.> In addition to OH elimination, a second reaction pathway may lead to glycolaldehyde (GA). Using molecular-beam mass spectrometry, the formation of glycolaldehyde during the ozonolysis of trans-2-butene was observed.<cit.> This is the only experimental work to our knowledge that reported glycolaldehyde formation from the syn-CH_3CHOO Criegee intermediate. Glycolaldehyde, an atmospheric volatile organic compound can also be generated from isoprene<cit.>, ethene<cit.> and biomass burning<cit.>. The present work introduces atomistic simulations based on validated machine-learned potential energy surfaces as a means for reaction discovery for photodissociation processes. Such an approach includes nonequilibrium, enthalpic, and entropic contributions to each of the reaction channels included in the computational model. For an overview of all species considered in the present work with respective relative energies, see Figure <ref>. The combined use of experimental and computed observables allows determination of the elusive O–O scission energy for OH liberation following the initial preparation of the reactant. For this, the final state total kinetic energy release (TKER) and rotational distributions of the OH fragment are determined from a statistically significant number of full-dimensional, reactive molecular dynamics simulations, starting from internally cold, vibrationally excited syn-CH_3CHOO with two quanta in the CH stretch normal mode, akin to the experimental procedures.<cit.> The global potential energy surfaces are either a neural network (NN) representation based on the PhysNet architecture,<cit.> or described at a more empirical level, using multi state adiabatic reactive MD (MS-ARMD).<cit.> Unlike MS-ARMD, with PESs based on PhysNet chemical bonds can break and form akin to ab initio MD simulations. This allows exploration of alternative reaction pathways one of which is the formation of glycolaldehyde. The formation and decomposition dynamics of GA is also studied in the present work and expands on our view regarding the fate of activated “simple” molecules in the atmosphere. § RESULTS §.§ Final State Distributions for the OH Fragment When considering computer-based tools for exploration of the dynamics and chemical development of molecules, it is imperative to validate the models used. Based on the MD simulations with the two PESs (see Figure <ref>) first the total kinetic energy release (TKER) to CH_2CHO + OH products was analyzed. In trajectories leading to OH as the final product, two different types of OH elimination pathways were observed. The first is referred to as “direct elimination” whereas the second is classified as “roaming elimination”. A qualitative criterion based on the distance traveled by the OH fragment was used to distinguish the two types of processes. For this, the moment at which the O–O separation reaches 3 Å is set as the zero of time. Then, the time for elimination (t_e) required for the O–O distance to increase from 3 Å to 10 Å was recorded. 
For t_ e < 0.8 ps a trajectory was identified as “direct elimination” whereas for all other cases (t_ e≥ 0.8 ps) it is a “roaming elimination”; see the SI and Figures S2 to S4 for the motivation of this choice for t_ e. Figure <ref> compares TKER distributions from simulations with CH-excitation at 5988 cm^-1 (because experimentally P(N) was reported at this excitation energy, see below) using different PESs with experimentally determined P( TKER) at a CH-excitation energy of 6081 cm^-1 forming OH(X^2 Π, v=0, N=3). For reference, from a 6-point moving average the position of the experimentally measured maximum is found at 542 cm^-1. Figure <ref>A reports distributions obtained from using the MS-ARMD PES for D_e = 22, 25, 27 kcal/mol (blue, red, green traces). The computed P( TKER) capture the experimentally determined distribution qualitatively correctly but are somewhat narrower, with a peak at ∼ 500 cm^-1 shifted to lower energy by ∼ 40 cm^-1 compared with experiment. Additional exploratory simulations were also carried out for a CH-excitation energy of 6081 cm^-1 using MS-ARMD and no significant change in the peak position was observed. On the other hand, P( TKER) to yield OH(X^2 Π, v=0, all N) for “direct elimination” from simulations using the PhysNet PES with two different dissociation energies for O–O scission agree well with experiment, see Figures <ref>B and C. The agreement is even better if only the subpopulation forming OH(X^2 Π, v=0, N=3) is compared with experiment, see Figure S5. Figures <ref>B (D_e = 22 kcal/mol) and <ref>C (D_e = 25 kcal/mol) report P( TKER) from “direct elimination” (dashed), “roaming elimination” (dotted-dashed) and their sum (solid). With D_e = 22 kcal/mol the peak position for “direct elimination” is at 538 cm^-1 - in almost quantitative agreement with experiment - which shifts to 290 cm^-1 if all trajectories are analyzed. Most notably, the total width of P( TKER) realistically captures the distribution measured from experiments, including undulations at higher energies. For an O–O scission energy of D_e = 25 kcal/mol the maximum for directly dissociating trajectories is at 531 cm^-1, compared with 286 cm^-1 if all trajectories are analyzed. The small shift of ≈ 5 cm^-1 in the peak position on increasing the barrier by 3 kcal/mol shows that the TKER does not depend appreciably on the barrier for O–O dissociation. This is because, independent of the precise value of D_e, the van der Waals complex in the product channel (CH_2CHO—HO) is always stabilized by 5 kcal/mol (structures E and F in Figure <ref>) relative to the completely separated CH_2CHO + OH fragments. On the other hand, the width of P( TKER) responds directly to changes in D_e - with increasing dissociation energy, the width of the distribution decreases appreciably. The observation that the sum of “direct elimination” and “roaming trajectories” yields a peak shifted to lower energy and less favourable agreement with experiment indicates that the amount of roaming trajectories (∼ 40 %) may be overestimated in the present simulations. Recent experiments and calculations on the formation of 1-hydroxy-2-butanone from the anti-methyl-ethyl-substituted Criegee intermediate found ≤ 10 % contribution of a roaming pathway.<cit.> This overestimation may be due to limitations of the CASPT2/cc-pVDZ calculations in accurately describing long-range interactions, which yields an overproportion of “roaming trajectories”.
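For completeness, the classification criterion described above amounts to a simple post-processing step over the O–O distance time series of each trajectory. The sketch below assumes a fixed storage interval for the trajectory data; that interval and the distance series itself are placeholders, not the actual simulation output.

```python
import numpy as np

DT_PS = 0.0005            # assumed output interval of the stored trajectory (ps)

def classify(r_oo):
    """Return 'direct', 'roaming', or 'no dissociation' for one trajectory,
    given the O-O distance (in Angstrom) at every stored time step.  t = 0 is
    set when r_OO first reaches 3 A; t_e is the time to grow from 3 A to 10 A."""
    r_oo = np.asarray(r_oo)
    i3 = np.argmax(r_oo >= 3.0)            # first frame with r >= 3 A
    if r_oo[i3] < 3.0:
        return "no dissociation"           # 3 A never reached
    after = r_oo[i3:]
    i10 = np.argmax(after >= 10.0)
    if after[i10] < 10.0:
        return "no dissociation"           # OH did not separate to 10 A
    t_e = i10 * DT_PS                      # elimination time in ps
    return "direct" if t_e < 0.8 else "roaming"
```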
Compared to simulations using the PhysNet PES, the MS-ARMD PES allows qualitatively correct studies but can, e.g., not describe the “roaming" part of the dynamics. Furthermore, quantitative studies require more detailed information about the angular/radial coupling and the electrostatic interactions as afforded by the PhysNet model based on CASPT2 calculations. Another distinguishing feature between the MS-ARMD and PhysNet PESs is that the NN-trained model features fluctuating partial charges depending on the internal degrees of freedom of the species involved. From a dynamics perspective, trajectories based on PhysNet convincingly demonstrate that OH-elimination proceeds either through a “direct" or a “roaming-type" mechanism. Hence, the added effort in constructing the PhysNet PES and the considerably increased compute time incurred compared with the computationally efficient MS-ARMD simulations allow to move from qualitative to quantitative results, both in terms of P( TKER) and the mechanistic underpinnings derived from analysis of the trajectories. Final OH-rotational distributions following CH-excitation at 5988 cm^-1 are shown in Figures <ref>A (MS-ARMD) and B/C (PhysNet with different values for D_e). The experimental P(N) distribution (grey symbols), also reported from excitation with 5988 cm^-1, peaks at N = 3 although the peak is broad and may also be considered to extend across 3 ≤ N ≤ 6. This compares with N_ max = 4 from the simulations on the MS-ARMD PESs with D_e = 22, 25, and 27 kcal/mol (solid blue, red, green circles), see Figure <ref>A. The computed P(N) follow a gamma-distribution (solid lines) which represent waiting time distributions<cit.> and were successfully used to model photodissociation of H_2SO_4 following vibrational excitation of the OH-stretch.<cit.> With increasing D_e the N_ max shifts to lower values: for D_e = 22 kcal/mol P(N) peaks at N= 4 and decreases to N= 3 for D_e = 25 and 27 kcal/mol. For all trajectories that were analyzed, the final vibrational energy of the OH product was always close to its vibrational ground state, i.e. ν_ OH = 0. Results from simulations using PhysNet with D_e = 22 and 25 kcal/mol in Figures <ref>B and C report P(N) for OH. Both distributions realistically describe the experimentally reported P(N), in particular the plateau between 3 ≤ N ≤ 6 - and for D_e = 25 kcal/mol also the width is captured. For D_e = 22 kcal/mol the width of P(N) is somewhat larger than that from experiments. With increasing dissociation energy the maximum for P(N) shifts from N_ max = 5 to N_ max = 4. The P(N) for “direct" and “roaming" type dissociation are shown in Figure S6. The final N-state of the photodissociating OH-product is to some extent correlated with the average O-O-H angle at the moment of dissociation which is assumed to be at an O–O separation of 3 Å. For final OH-rotational states N < 6 the average O-O-H angle decreases from 140^∘ to 105^∘ whereas for N ≥ 6 it remains at ∼ 100^∘, see Figure S7. Previous computational studies<cit.> reported final state distributions from trajectories that were initiated from either the H-transfer TS or the submerged saddle point configuration immediately preceding the OH elimination step D, see Figure <ref>, i.e. from simulations that did not follow the entire dynamics between syn-CH_3COOH to OH-elimination. 
Simulations starting at the H-transfer TS yielded total kinetic energy release distributions P(TKER) for OH(X^2Π, v=0, all N) with the maximum peak shifted to considerably higher energy (∼ 1500 cm^-1 compared with ∼ 540 cm^-1 from experiment) and a width larger by a factor of ∼ 2 compared with the observed one, whereas starting trajectories at the submerged barrier shifted the maximum to lower kinetic energy (∼ 800 cm^-1). For the OH-rotational distributions P(N), it was found that starting the dynamics at the H-transfer transition state yields a width and position of the maximum in accordance with experiment, whereas initiating the dynamics at the submerged saddle point shifts the maximum to smaller N_max and leads to narrower P(N). Overall, starting the dynamics at intermediate points along the pathway yields qualitatively correct final state distributions which can, however, differ by up to a factor of 3 in the position of the peak maximum or the width from the experimentally measured ones. Taken together, the present results for P(TKER) and P(N) suggest that PhysNet transfer-learned to the CASPT2 level and used in the hybrid simulations with D_e = 22 kcal/mol for the O–O bond yields final state distributions consistent with experiment. With increasing D_e from 22 to 25 kcal/mol both final state distributions show decreasing probabilities for higher kinetic energies and rotational states when considering OH(X^2Π, v=0, all N). If the final rotational state of the OH-photoproduct is limited to N=3 and compared with experiment (see Figure S5), both values of D_e yield favourable agreement, in particular for “directly dissociating” trajectories, whereas for D_e = 25 kcal/mol P(TKER) from “all trajectories” (solid red line) yields good agreement as well. Furthermore, the simulations provide information about the N-dependence of P(TKER), see Figure S8, which indicates that for OH(v=0, N ∈ [0,3]) the distribution is somewhat narrower up to TKER ∼ 1000 cm^-1, with more pronounced undulations, compared with P(TKER) for OH(v=0, N ∈ [9,12]). This information is complementary to what is available from experiments. The MS-ARMD simulations yield P(TKER) that are somewhat too narrow, but P(N) with D_e = 25 kcal/mol also realistically describes the experimentally reported distributions. These findings support the present and previous CASPT2 calculations, which found and reported dissociation energies of up to 26 kcal/mol, and clarify that the values of 31.5 kcal/mol (6-31G(d)) and 35.7 kcal/mol (aug-cc-pVTZ) from MP2 calculations suffer from the assumption of a single-reference character of the electronic wavefunction.<cit.> §.§ Thermal Rates With the new PhysNet PES transfer-learned to the CASPT2 level, energy-dependent rates were determined for three excitation energies, i.e. 5603, 5818, and 5988 cm^-1. A stretched exponential with stretch exponents ranging from 1.4 to 1.9 captures the behavior of N(t) remarkably well. Such values suggest that kinetic intermediates are visited that lead to distributed barriers,<cit.> which is also consistent with the Gamma-distribution shape of P(N). The thermal rates for the two values of D_e and from using the two fitting functions are reported in Figure <ref> together with the experimentally determined rates. The moderate energy dependence of k(E) is correctly captured by the simulations but - depending on whether a single- or a stretched-exponential is employed - the absolute values of the rates are too large by one to two orders of magnitude, respectively.
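For readers who wish to reproduce the rate extraction, a minimal sketch of fitting the unreacted population N(t)/N_tot to the single- and stretched-exponential forms quoted in the Methods section is given below (the data file is a hypothetical placeholder, not the actual analysis script):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical input: time (ps) and fraction of unreacted trajectories N(t)/N_tot.
t, survival = np.loadtxt("survival.dat", unpack=True)

def single_exp(t, k):
    return np.exp(-k * t)

def stretched_exp(t, k, gamma, d):
    return (1.0 - d) * np.exp(-((k * t) ** gamma)) + d

(k1,), _ = curve_fit(single_exp, t, survival, p0=[1e-2])
(k2, gamma, d), _ = curve_fit(stretched_exp, t, survival, p0=[1e-2, 1.5, 0.05])
print(f"single: k = {k1:.3e} 1/ps; stretched: k = {k2:.3e} 1/ps, gamma = {gamma:.2f}, d = {d:.3f}")
```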
Within equilibrium transition state theory, such an overestimation of the rates reflects a barrier along the pathway that is overestimated by 2 to 3 kcal/mol. This point is discussed further below. §.§ Dissociating Trajectories, Roaming and Glycolaldehyde Formation Following the dynamics of OH after dissociation revealed that the system accesses final states different from CH_2CHO+OH. As one alternative, roaming of OH leads to recombination with the CH_2 group to yield glycolaldehyde, which is stabilized by 66.5 kcal/mol relative to VHP, see Figure <ref>. This intermediate can decay to form new products such as CH_2OH+HCO or CH_2OHCO+H, see Figures <ref> and <ref>. An overview of the decay channels observed from the simulations using the PhysNet PES following formation of internally hot VHP is shown in Figure <ref>. It is useful to recall that the reaction is initiated through excitation of the CH-stretch mode with approximately 2 quanta, which makes elimination of OH from VHP a nonequilibrium process because vibrational energy redistribution is not complete within the lifetime of VHP (∼ 20 ps, see Figure S10). For each pathway and product, the percentages of the final states on the 200 ps time scale are reported, as determined from 6000 independent trajectories with D_e = 22 kcal/mol and excitation of the CH-stretch at 5988 cm^-1. Out of all the trajectories run (excluding those with OH-vibrational energies lower than the quantum-mechanical zero-point energy), 24 % lead to direct elimination of OH and 41 % roam around CH_2CHO, out of which 20 % (i.e. 7 % of the total) lead to OH elimination and 80 % (i.e. 34 % of the total) form GA. The remaining 29 % yield GA in a direct fashion and 6 % remain in VHP on the 200 ps time scale but may undergo further chemical processing on longer time scales. From the total GA population formed (63 % of all trajectories), 7 % decay to CH_2OH + HCO, 6 % end up in CH_2OHCO+H and the remainder stays in GA. It should be noted that the branching fractions depend on the total simulation time considered. For example, it is expected that all of the GA formed in the gas phase will eventually decay on sufficiently long time scales because sufficient energy is available. Explicit time series for important bonds involved in the formation of GA are reported in Figure <ref>. Starting from syn-CH_3COOH at t=0, vibrational excitation of the CH-stretch leads to formation of VHP after 36 ps. The system dwells in this state for ∼ 15 ps until OH elimination and transfer to the CH_2 group lead to GA, which has a residence time of ∼ 30 ps, after which HCO elimination occurs at 81 ps. The trajectory shown involves direct OH-transfer between VHP and GA, without OH-roaming. Figure <ref>A shows an example of the OH dynamics around the vinoxy radical before glycolaldehyde formation. This “attack from the backside” is a hallmark of a roaming reaction. Probability densities for the OH radical moving around CH_2CHO before glycolaldehyde formation, using 500 independent trajectories from the PhysNet PES with D_e = 22 kcal/mol, are shown in Figures <ref>B/C. In both “direct transfer” (Figure <ref>B) and “roaming” (Figure <ref>C) trajectories, the OH radical moves out of plane to attack the -CH_2 group. A similar out-of-plane roaming was reported from experiments on formaldehyde.<cit.> In direct-transfer trajectories, OH follows a semicircular path around the vinoxy group, whereas pronounced roaming occurs for the other trajectories, see Figure <ref>C. Formation times of GA after dissociation of OH from VHP extend up to ∼ 2 ps, see Figure S11.
The initial, pronounced peak at 0.1 ps is due to “direct transfer” between VHP and GA, whereas the remainder of the exponentially decaying distribution involves OH-roaming trajectories with maximum diffusion times of 2.6 ps. Out of the total amount of GA formed, 11 % (7 % of total VHP) show C-C bond cleavage to form CH_2OH + HCO radicals, and the lifetime distribution of GA before dissociation is shown in Figure S12. The distribution has lifetimes of up to ∼ 10 ps with a most probable lifetime around ∼ 1 ps. Two-dimensional projections of the OH positions around the CH_2CHO radical, as shown in Figure <ref>, provide an impression of the spatial range sampled by the OH radical before formation of GA. Typical excursions involve separations of 3 to 4 Å away from the center of the C-C bond of CH_2CHO. § DISCUSSION AND CONCLUSION The present work establishes that photodissociation of the energized syn-acetaldehyde oxide intermediate can lead to additional products beyond the known CH_2CHO+OH fragments. This is possible because the OH fragment, following dissociation of VHP, can roam around the CH_2CHO radical and lead to glycolaldehyde formation, from which other dissociation channels open up, e.g. leading to the CH_2OH+HCO or CH_2OHCO+H fragments. This discovery was possible because NN-trained PESs allow bond-breaking/bond-formation to occur akin to ab initio MD simulations while allowing statistically significant numbers of trajectories to be run. Two strategies were used in the present work to refine and improve the PESs.<cit.> The first one was based on adjusting the dissociation energy for OH formation in the MS-ARMD PES, akin to “morphing”,<cit.> which is particularly simple due to the empirical nature of the parametrization. The second approach was to use multi-reference-based training energies (at the CASPT2 level) to transfer learn the reactive PES based on the PhysNet architecture. A multi-reference treatment is required to realistically describe the energetics of VHP, its dissociation to CH_2CHO+OH, and the formation and decay of GA. Both approaches adapt the shape of a global PES based on local information and attempt to preserve the overall shape of the PES by fine-tuning features to which the available observables are sensitive.<cit.> The product state distributions for fragment kinetic energy and rotational excitation of the OH product are in almost quantitative agreement with experiment, in particular for D_e = 22 kcal/mol and the transfer-learned PhysNet PES. This differs from earlier efforts<cit.> which initiated the dynamics at transition states and found only qualitative agreement, in particular for P(TKER). Even the more empirical MS-ARMD representation yields reasonable P(TKER) and P(N) given the considerably decreased parametrization effort compared with TL-based PhysNet. On the other hand, the roaming pathway cannot be described realistically using the present MS-ARMD model. The rates following overtone excitation of the CH-stretch show the correct energy dependence compared with experiment but are larger by one to two orders of magnitude, which is attributed to the higher barrier for H-transfer in the first step between syn-CH_3COOH and VHP.
For the O–O scission energy the present work suggests a best estimate of D_e = 22 kcal/mol, which compares with ∼ 25 kcal/mol from electronic structure calculations at the CASPT2 level of theory.<cit.> The undulations superimposed on P(TKER) are spaced by ∼ 1000 cm^-1, which can potentially be related to the C-C stretch mode in CH_2CHO, reported at 917 cm^-1.<cit.> Such an assignment is further corroborated by separately analyzing the C-C distribution functions for the ∼ 100 trajectories with the lowest and highest TKER from Figure <ref>B, respectively. For large kinetic energy release, less energy is available to distribute over the internal degrees of freedom, which leads to a smaller average C-C separation and a narrower distribution, whereas for low TKER more energy is available, which shifts the average C-C separation to larger values and broadens the distribution, see Figure S13. OH-roaming times until recombination to GA occurs are in the range of picoseconds, consistent with earlier reports on roaming in nitromethane.<cit.> Glycolaldehyde formation from photodissociation of syn-CH_3COOH is therefore another case of “molecular roamers” (here OH).<cit.> As was reported earlier, OH elimination under atmospheric conditions takes place on longer (μs) time scales.<cit.> Because OH-roaming to form GA occurs on considerably shorter (ps) time scales, the processes considered in the present work are also likely to be relevant to the chemical evolution of the atmosphere, depending on the environmental conditions (local density), and will lead to a rich and largely unexplored chemistry. A notable finding of the present work is that the final distributions for translational kinetic energy and rotational quantum numbers of the OH fragment are sensitive to the O–O dissociation energy, whereas the energy-dependent rates k(E) calculated using the stretched exponential function are not. Assuming k(E) to follow an Arrhenius-type expression at T=300 K would lead to an expected increase in the rate by a factor of ∼ 150, i.e. by approximately two orders of magnitude, upon reducing the dissociation energy by 3 kcal/mol. This is evidently not what is found from the present simulations, which rather lead to a change in the rate by a factor of 2 to 5, see Figure <ref>. Therefore, elimination of OH from VHP is a nonequilibrium process: the reaction is initiated through vibrational excitation of the CH-stretch, vibrational energy is only equilibrated across all degrees of freedom on the time scale of the reaction, and as a consequence k(E) is not particularly sensitive to the O–O dissociation energy. This also suggests that the entire pathway from the initial reactant (here syn-CH_3COOH) to the products (here: CH_2CHO+OH, CH_2OHCO+H, and CH_2OH+HCO) needs to be followed in order to realistically (re)distribute internal and relative energies in the species involved. Based on the agreement for the final state distributions, D_e ∈ [22,25] kcal/mol is the recommended range from the present work. With yet larger values for the O–O scission energy (e.g. D_e^OO = 28 kcal/mol as in Figure S14), in particular P(TKER) deviates appreciably from the observations. Hence, a combination of experimental results and advanced computational modeling, based on statistically significant numbers of trajectories using a PES rooted in CASPT2 calculations, allows one to approximately determine an essential quantity for atmospheric modeling - the O–O scission energy.
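For transparency, the Arrhenius-type factor quoted above follows directly from k ∝ exp(-D_e/(k_B T)): with k_B T ≈ 0.596 kcal/mol at T = 300 K, lowering the dissociation energy by Δ D_e = 3 kcal/mol gives k(D_e - 3)/k(D_e) ≈ exp(Δ D_e/(k_B T)) = exp(3/0.596) ≈ e^5.0 ≈ 150, i.e. roughly two orders of magnitude - to be contrasted with the factor of 2 to 5 actually observed in the nonequilibrium simulations.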
Determining the O–O scission energy in this way is comparable to estimating the dissociation energy for HO_3 from energy-dependent rates and statistical mechanical modeling,<cit.> however, without assuming thermal equilibrium in the present case. It is further noted that the amount of roaming may be overestimated in the present work, as seen for P(TKER) in Figure <ref>, which can be a consequence of the rather small basis set used in the CASPT2 calculations and of details of the long-range interactions that can be improved further. The current work also clarifies that a wide range of final products is potentially accessible from a single initial reactant such as syn-CH_3COOH. Combining the methods used in the present work with techniques to enumerate all possible fragmentation products of a molecule,<cit.> a comprehensive and quantitative exploration of the decomposition dynamics of important atmospheric molecules is within reach. The reactive radicals generated in such processes engage in various bimolecular reactions, yielding outcomes that have potentially significant consequences for atmospheric modeling. The present study probably uncovers only a small portion of the extensive possibilities in reactive intermediates and final decomposition products of CIs accessible under atmospheric conditions. It is likely that the findings of the present work are also relevant for decomposition reactions of compounds other than CIs. With recent advances in generating accurate, high-dimensional reactive PESs<cit.> paired with transfer-learning strategies<cit.> and integration into broadly applicable molecular simulation software,<cit.> routine exploration and discovery of the decomposition dynamics of small to medium-sized molecules becomes possible. § METHODS §.§ Reactive Potential Energy Surfaces From previous work, reactive PESs are available as multi-state adiabatic reactive molecular dynamics (MS-ARMD) and neural network (PhysNet) representations,<cit.> both of which employed reference calculations at the MP2 level of theory. The MS-ARMD PES fit to MP2/6-31G(d) reference data correctly describes the barrier for H-transfer with a barrier height of 16.0 kcal/mol, consistent with experimental estimates,<cit.> but the dissociation energy for OH-elimination from VHP, for which no experimental data are available, is considerably larger (31.5 kcal/mol) than estimates between 18 and 26 kcal/mol from previous multi-reference calculations.<cit.> Consequently, the O–O scission energy in MS-ARMD was empirically adjusted to match estimates of around 25 kcal/mol. Due to the limitations of the MP2 reference calculations, the PhysNet model also suffers from these shortcomings for the OH elimination step. Analysis of the two available PESs motivated the following specific improvements. The PhysNet PES is transfer learned<cit.> to the CASPT2(12,10)/cc-pVDZ level of theory based on reference energies, forces and dipole moments for 26,500 structures computed with MOLPRO 2019.<cit.> The reference data set contains 16,000 structures from the MP2 data set, randomly chosen along the pathway between CH_3COOH and OH elimination, and an additional 10,500 OH-roaming, glycolaldehyde and CH_2OH + CHO structures. The dissociation barrier for the O–O bond at the CASPT2(12,10)/cc-pVDZ level of theory, starting from the VHP equilibrium to a separated vinoxy and OH radical complex, is determined to be ∼ 22 kcal/mol and is consistent with earlier work.<cit.> The quality of the transfer-learned PES is shown in Figure <ref>.
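Although the actual PhysNet training code is not reproduced here, the transfer-learning step can be illustrated schematically: starting from a model pretrained on the MP2 data, training is simply continued on the smaller CASPT2 reference set with a combined energy/force/dipole loss and a small learning rate. The following PyTorch-style sketch is purely illustrative (all function and field names are our own assumptions, not the PhysNet API):

```python
import torch

def transfer_learn(model, caspt2_loader, epochs=50, lr=1e-4, w_e=1.0, w_f=10.0, w_d=1.0):
    """Fine-tune a pretrained NN-PES on CASPT2 energies, forces and dipoles."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # small lr: refine, do not retrain
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for batch in caspt2_loader:
            energy, forces, dipole = model(batch["coords"], batch["atom_types"])
            loss = (w_e * mse(energy, batch["energy_ref"])
                    + w_f * mse(forces, batch["forces_ref"])
                    + w_d * mse(dipole, batch["dipole_ref"]))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```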
For the MS-ARMD PES the O–O scission energies considered were D_e = 22, 25, and 27 kcal/mol, and the charges on the separating OH fragment were reduced to q_O = -0.1e, q_H = 0.1e such that the PES correctly dissociates for large O–O separations. In order to determine the sensitivity of the results from the MD simulations to the O–O dissociation energy, the PhysNet model was modified to increase the scission energy to D_e ∼ 25 kcal/mol. The modification adds an energy term f(r_OO) = Δ V_s / [1 + e^{-b(r_OO - r_e)}] to the transfer-learned PhysNet representation to increase the dissociation energy by Δ V_s. The sigmoidal function depends on the O–O separation r_OO, and the parameters Δ V_s = 3 kcal/mol, b = 14.0 Å^-1, and r_e = 1.95 Å set the increase of the dissociation energy and control the slope and the turning point of the curve, respectively. For syn-CH_3COOH and VHP conformations f(r_OO) ∼ 0, and it increases to Δ V_s = 3 kcal/mol as the OH radical dissociates. §.§ Molecular Dynamics Simulations Molecular dynamics (MD) simulations using the MS-ARMD PES were carried out with a suitably extended c47a2 version of the CHARMM program,<cit.> whereas simulations using the PhysNet PES were carried out with the pyCHARMM API linked to c47a2 to pipe the NN-energies and forces into the MD code.<cit.> All production MD simulations, 200 ps in length for each run, were carried out in the NVE ensemble with a time step of Δ t = 0.1 fs to conserve total energy. For each variant of the PESs (MS-ARMD and PhysNet with corresponding values for D_e^OO) at least 6000 independent trajectories were run. Initial conditions were generated from NVT simulations at 50 K, from which structures and velocities were extracted at regular intervals. Subsequently, the velocities of the atoms involved in the C–H bond closest to the accepting oxygen O_B were excited by ∼ 2 quanta along the bond vector, corresponding to excitation energies ranging from 5603 to 6082 cm^-1, consistent with experiment.<cit.> To emulate the cold conditions encountered in the experiments, the velocities of all remaining atoms were set to zero. Simulations based on the MS-ARMD representation used this PES throughout the entire simulation. With the PhysNet representation the simulations used a hybrid protocol. For the H-transfer to form VHP from syn-CH_3COOH the MS-ARMD representation, featuring the correct 16.0 kcal/mol barrier, was used. Then, the positions x⃗ and momenta p⃗ of all atoms were stored and the simulation was restarted using the PhysNet representation. For consistency, a scaling λ for the momenta p⃗ of all atoms was determined from T(p⃗) + Δ V_MS-ARMD(x⃗) = T(λ·p⃗) + Δ V_PhysNet(x⃗) to match the sum of the kinetic energy T and the potential energy difference Δ V between the potential at atom positions x⃗ and the VHP equilibrium structure for both PESs. §.§ Final State Analysis and Rates For the final state analysis, the total energy of the separated system was decomposed into fragment translational (E_trans), rotational (E_rot), and vibrational (E_vib) energy components. The experimentally measured total kinetic energy release (TKER) is the sum of the translational energy contributions of the OH and vinoxy radical fragments, E_trans,α = p⃗_CM,α^2 / (2M_α), where M_α is the mass and p⃗_CM,α the center-of-mass momentum of fragment α, obtained as the sum of the respective atom momenta.
The rotational energy E_rot of the dissociating OH radical is E_rot = L^2/(2I), with the angular momentum vector L⃗ = ∑_i=1^2 (r⃗_i - r⃗_CM) × p⃗_i, where r⃗_i - r⃗_CM is the atom position with respect to the center-of-mass position, p⃗_i are the atom momenta, and I is the OH moment of inertia. As the exchange of energy between vibrational and rotational degrees of freedom persists in the presence of non-zero internal angular momentum, E_rot is averaged over the final 25 ps after dissociation. The TKER and average E_rot become constant after both fragments have moved further than the interaction cutoff of 6 Å from each other. The vibrational energy of the OH fragment was then computed according to E_vib = E_OH - E_trans - E_rot with E_OH = T + V(r_OH), which was invariably close to the quantum-mechanical ground-state vibrational energy for the respective rotational state of OH. Hence, final OH products are always in ν_OH = 0. Trajectories (30 %) with OH-vibrational energies lower than the quantum-mechanical zero-point energy at the respective rotational state were excluded from the analysis. To determine reaction rates, N_tot = 6000 individual simulations were carried out for each energy. The rates were determined by fitting the number N(t) of trajectories that had not reacted by time t to single-exponential (∼ exp(-kt)) and stretched-exponential decays ((1-d) exp(-(kt)^γ) + d); see Figure S9 for the quality of the two fits. § DATA AVAILABILITY The reference data and PhysNet codes that allow one to reproduce the findings of this study are openly available at <https://github.com/MMunibas/Criegee-CASPT2> and <https://github.com/MMunibas/PhysNet>. § ACKNOWLEDGMENT This work was partially supported by the Swiss National Science Foundation through grants 200021-188724, the NCCR MUST (to MM), and the University of Basel, which is gratefully acknowledged. We also thank Prof. Michael N. R. Ashfold for valuable discussions.
http://arxiv.org/abs/2307.00569v1
20230702133636
SSP: Self-Supervised Post-training for Conversational Search
[ "Quan Tu", "Shen Gao", "Xiaolong Wu", "Zhao Cao", "Ji-Rong Wen", "Rui Yan" ]
cs.CL
[ "cs.CL" ]
Conversational search has been regarded as the next-generation search paradigm. Constrained by data scarcity, most existing methods distill the well-trained ad-hoc retriever to the conversational retriever. However, these methods, which usually initialize parameters by query reformulation to discover contextualized dependency, have trouble understanding the dialogue structure information and struggle with contextual semantic vanishing. In this paper, we propose Self-Supervised Post-training (SSP), which is a new post-training paradigm with three self-supervised tasks to efficiently initialize the conversational search model and enhance the dialogue structure and contextual semantic understanding. Furthermore, SSP can be plugged into most of the existing conversational models to boost their performance. To verify the effectiveness of our proposed method, we apply the conversational encoder post-trained by SSP to the conversational search task using two benchmark datasets: CAsT-19 and CAsT-20. Extensive experiments show that our SSP can boost the performance of several existing conversational search methods. Our source code is available at <https://github.com/morecry/SSP>. § INTRODUCTION The past years have witnessed the fast progress of ad-hoc search<cit.>. However, when confronted with more complicated information needs, traditional ad-hoc search seems to be less competent. Recently, researchers have proposed conversational search, which combines the search engine and the conversational assistant <cit.>.
Different from the keyword-based query in ad-hoc search, the multi-turn natural language utterance is the main interaction form in conversational search. This creates a challenge for developing conversational search systems: existing ad-hoc retrievers and datasets cannot be directly used to derive the conversational query understanding module. Initially, researchers reformulated a conversational query into a de-contextualized query, which is then used to perform ad-hoc retrieval <cit.>. Recently, the conversational dense retrieval model <cit.> was presented to directly encode the whole multi-turn conversational context as a vector representation and conduct matching with the candidate document representations. Since a real-world conversational search corpus is hard to collect, a warm-up step is additionally employed to initialize the conversational representation ability <cit.>. These conversational dense retrieval methods have achieved significantly better performance than the query reformulation methods and have been widely adopted in conversational search research <cit.>. However, these warm-up methods just use the same training objective on a large dataset from other domains to initialize the parameters of the conversational encoder, which can hardly capture the structural information of the conversation that is essential for accurately understanding the user's search intent. In this paper, we propose Self-Supervised Post-training (SSP) for the conversational search task, as shown in Figure <ref>. In SSP, we replace the commonly used warm-up step with a new post-training paradigm that contains three novel self-supervised tasks to learn how to capture the structure information and keep the contextual semantics. To be more specific, the first self-supervised task is topic segmentation, which learns to decompose the dialogue structure into several segments based on the topic. To tackle coreference, a ubiquitous problem in multi-turn conversation modeling, we propose the coreference identification task, which helps the model identify the most likely referred terms in the context and simplifies the intricate dialogue structure. Since understanding and remembering the semantic information in the conversational context is vital for conversational context modeling, we propose the word reconstruction task, which prevents contextual semantic vanishing. To demonstrate the effectiveness of SSP, we first equip several existing conversational search methods with SSP and conduct experiments on two benchmark datasets: CAsT-19 <cit.> and CAsT-20 <cit.>. Experimental results demonstrate that SSP outperforms all the strong baselines on both datasets. To sum up, our contributions can be summarized as follows: ∙ We propose a general and extensible post-training framework to better initialize the conversational context encoder in existing conversational search models. ∙ We propose three specific self-supervised tasks which help the model capture the conversational structure information and prevent the contextual semantics from vanishing. ∙ Experiments show that our SSP can boost the performance of strong conversational search methods on two benchmark datasets and achieves state-of-the-art performance. § RELATED WORK Conversational search has become a hot research topic in recent years. The TREC Conversational Assistant Track (CAsT) competition <cit.>, which provides the benchmark, has largely promoted the progress of conversational search.
Initially, researchers simply viewed conversational search as a query reformulation problem: if a context-dependent query can be rewritten into a de-contextualized query based on the historical queries, a well-trained ad-hoc retriever can then be used directly to obtain retrieval results. Transformer++ <cit.> fine-tunes GPT-2 on the query reformulation dataset CANARD <cit.> to rewrite queries. QueryRewriter <cit.> exploits large amounts of ad-hoc search sessions to build a weakly supervised query reformulation data generator; the automatically generated data are then used to fine-tune the language model. However, these methods underestimate the value of context, which contains various latent search intentions and topic information. After that, the conversational dense retriever was proposed. It directly encodes the full conversation, whose last query denotes the user's real search intention, into a dense representation. ConvDR <cit.> forces the contextual representation to mimic the reformulated query representation based on a teacher-student framework, which partially alleviates the conversational search data scarcity problem. Further, COTED <cit.> points out that not all queries in the context are useful and devises a curriculum denoising method to inhibit the influence of unnecessary contextual queries. These dense methods additionally perform a warm-up on an out-of-domain dataset to initialize the parameters based on their own objective. However, their warm-up ignores the conversational structure information, which is crucial for capturing the relationship between utterances and understanding the search intention of the user. In this respect, we devise a novel Self-Supervised Post-training (SSP) to replace the warm-up, as shown in Figure <ref>. § PROBLEM FORMULATION We assume that there is a multi-turn search conversation Q={q_1, q_2, …, q_n}, where q_i={x_i,1, x_i,2, …, x_i,l_i} represents the i-th question in the conversation and x_i,j is the j-th token in q_i. The last query q_n expresses the user's real search intention. We insert special tokens [𝙲𝙻𝚂] and [𝚂𝙴𝙿] into Q, yielding {[𝙲𝙻𝚂], q_1, [𝚂𝙴𝙿], q_2, [𝚂𝙴𝙿], …, [𝚂𝙴𝙿], q_n} as the model input, where [𝙲𝙻𝚂] is the start token and [𝚂𝙴𝙿] is the separation token used to split the queries. After the concatenation of all queries is sent into the conversational encoder (a transformer-based model), we obtain the last layer's output hidden states E. E_[𝙲𝙻𝚂] and E_[𝚂𝙴𝙿] are the corresponding representations of [𝙲𝙻𝚂] and [𝚂𝙴𝙿] and will be used in the self-supervised tasks. Our goal is to learn a better contextual representation E_[𝙲𝙻𝚂] in order to accurately retrieve documents in the corpus for the last query q_n. § SELF-SUPERVISED POST-TRAINING §.§ Overview In this section, we present our Self-Supervised Post-training framework, abbreviated as SSP. An overview of SSP is shown in Figure <ref>; it consists of three self-supervised tasks: ∙ Topic Segmentation Task aims to find the topic-shifting points in the utterances. It helps the model capture the topic structure in the conversational context. ∙ Coreference Identification Task aims to identify the correlation structure between two referred utterances, which helps the conversational encoder understand the coreference relationship and produce better query representations. ∙ Word Reconstruction Task aims to reconstruct the bag-of-words (BOW) vector of the conversational context using the conversational vector representation. It helps the model avoid contextual semantic vanishing during conversation encoding.
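To make the input construction of the problem formulation concrete, a minimal sketch using the Transformers library (the checkpoint name and example queries are illustrative only; they are not the authors' exact configuration) is:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")

queries = ["what is throat cancer", "is it treatable", "what are the symptoms"]
text = f" {tokenizer.sep_token} ".join(queries)     # q_1 [SEP] q_2 [SEP] ... q_n
inputs = tokenizer(text, return_tensors="pt")       # adds [CLS] ... [SEP] automatically

hidden = encoder(**inputs).last_hidden_state        # shape: (1, seq_len, h)
ids = inputs["input_ids"][0]
e_cls = hidden[0, 0]                                # E_[CLS]
e_sep = hidden[0, ids == tokenizer.sep_token_id]    # one E_[SEP] per utterance
```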
After jointly training the conversational encoder using these three self-supervised tasks, we fine-tune the encoder on the downstream conversational search task using the existing conversational search methods. §.§ Topic Segmentation Task When the user interacts with the conversational search system, the focused topic may vary from time to time. Taking the example in Figure <ref>, the search intention of the user changes according to the retrieval results of previous turns. This causes the topic of the conversation to shift. Since the conversation topic may shift in every utterance, to fully understand a user query the conversational system should know the current topic of this query and treat the utterances of the current topic as more salient context. If the conversational encoder cannot identify the boundary of the current topic, it may focus on unrelated utterances and incorporate noisy information into the query representation. Thus we propose the topic segmentation task to identify the topic boundary of the conversation, which helps the model focus on more related context when encoding the query. We first randomly sample a noise conversational session with several utterances from the training corpus and then concatenate this sampled noise session at the beginning of the raw conversational context. Given the raw search conversation Q={q_1, q_2, …, q_n} and the noisy conversation Q^'={q^'_1, q^'_2, …, q^'_m}, we truncate Q^' to its first k queries, where k is sampled from the reciprocal probability distribution p_k = (1/k) / ∑_i=1^m (1/i), k = 1, 2, …, m, which avoids distortion of the raw context by overly long noisy sessions. After concatenating the sampled noise session before the raw context and separating each query by [𝚂𝙴𝙿], we obtain the perturbed conversation Q̌={[𝙲𝙻𝚂], q^'_1, [𝚂𝙴𝙿], …, q^'_k, [𝚂𝙴𝙿], q_1, [𝚂𝙴𝙿], …, q_n} and the ground-truth topic label y^t={1, …, 1, 0, …, 0}, where the queries from the external conversation are labelled as 1 and the ones from the raw conversation are labelled as 0. Next, we use the perturbed conversation Q̌ as input to the conversational encoder and obtain the vector representation Ě = {E_[𝙲𝙻𝚂], E^'_1, E_[𝚂𝙴𝙿], …, E^'_k, E_[𝚂𝙴𝙿], E_1, E_[𝚂𝙴𝙿], …, E_n} of the perturbed conversation Q̌. Finally, each E_[𝚂𝙴𝙿] is sent to the topic predictor (a linear layer) to decide whether the corresponding utterance is from the sampled noise conversation Q^' or not. The binary cross entropy is used to compute the topic segmentation loss ℒ_TS: p(y^t_i=1|Q̌) = Sigmoid(W_tE_[𝚂𝙴𝙿]+b_t), ℒ_TS = -y^t_i log(p(y^t_i=1|Q̌)) - (1-y^t_i) log(1-p(y^t_i=1|Q̌)), where W_t ∈𝐑^h×1 and b_t ∈𝐑 are trainable parameters and h is the hidden size of the model. §.§ Coreference Identification Task In conversational search, a common problem is coreference: a pronoun in a query usually refers to a term in its previous queries. Most of the existing methods do not explicitly train the model to tackle this problem. Here, we devise an auxiliary self-supervised task that trains the model to predict, through the coreference relationship, which utterance the last utterance refers to. To determine which utterance in the conversational context has a coreference relationship with the last utterance, we use the query reformulation corpus as follows.
We compare the last query in Q with the reformulated query q^*_n by set operations to find the reformulation terms r that have been omitted in Q: r = 𝒮(tokenize(q^*_n)) - 𝒮(tokenize(q_n)), where 𝒮 is a set operation that converts a sentence into a non-repeating word set. We obtain the reformulation terms r by computing the difference between the two sets. Then r is used to locate the referred query, searching from back to front until the first query containing r is found. We mark the position of the referred query with the label y^c={0,0, …, 1, …, 0}, whose i-th value is 1 only if the i-th query is the referred query. Similar to the topic segmentation task (introduced in  <ref>), we send E_[𝚂𝙴𝙿] into a coreference predictor to predict the referred query and use the binary cross-entropy as the loss function of this task: p(y^c_i=1|Q) = Sigmoid(W_cE_[𝚂𝙴𝙿]+b_c), ℒ_CI = -y^c_i log(p(y^c_i=1|Q)) - (1-y^c_i) log(1-p(y^c_i=1|Q)), where W_c ∈𝐑^h×1 and b_c ∈𝐑 are trainable parameters. With the coreference identification task, the conversational encoder pays more attention to the most likely referred query in the context when it encodes the last query. §.§ Word Reconstruction Task A one-stage conversational retriever encodes a query into a single dense vector. In the previous sections, the self-supervised tasks teach the model to focus on the utterances of the current topic and the highly related utterance identified by coreference. However, other utterances may also provide useful information to understand the current search intent. Thus, the conversational encoder should not only gather information from the related utterances but also keep the information from the whole conversational context. To avoid information vanishing in the final conversational vector representation, we propose a simple but effective reconstruction task that helps the conversational encoder keep the overall semantic information. In this task, we train the model to reconstruct the bag-of-words (BOW) vector of the whole conversation using the representation of [𝙲𝙻𝚂] produced by the conversational encoder. Specifically, all of the words appearing in the context are converted to a BOW vector y^w, y^w = BOW(𝒮(tokenize(Q))), where the length of y^w is the vocabulary size and y^w_i=1 only if the i-th word in the vocabulary appears in the context, otherwise y^w_i=0. We use a linear layer after the last layer of the model to process E_[𝙲𝙻𝚂] and optimize the WR loss based on the mean squared error, ŷ^w = Sigmoid(W_wE_[𝙲𝙻𝚂]+b_w), ℒ_WR = ||ŷ^w-y^w||_2, where W_w ∈𝐑^h×|V|, b_w ∈𝐑^|V|, |V| is the vocabulary size, and ||·|| denotes the Euclidean distance. §.§ Optimization Inspired by previous studies <cit.>, we also employ a knowledge distillation objective in SSP to accelerate the learning process. Specifically, we use a pre-trained ad-hoc search encoder TEnc, which takes the de-contextualized query as input and produces a vector representation. We use TEnc as the teacher model and employ a knowledge distillation loss function to train our conversational encoder to mimic the vector representation produced by the teacher encoder TEnc. We formulate the knowledge distillation loss ℒ_KD as follows: E^*_[𝙲𝙻𝚂] = TEnc({[𝙲𝙻𝚂], q^*_n})_[𝙲𝙻𝚂], ℒ_KD = ||E_[𝙲𝙻𝚂]-E^*_[𝙲𝙻𝚂]||_2, where q^*_n is the manually rewritten query of q_n and (·)_[𝙲𝙻𝚂] means taking only the [𝙲𝙻𝚂] representation of TEnc's last-layer output. We make the conversation representation E_[𝙲𝙻𝚂] approximate the reformulated query representation E^*_[𝙲𝙻𝚂] produced by TEnc to distill its powerful retrieval ability.
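A compact, illustrative sketch of the four training signals described above is given below; the module and variable names are ours (not the released implementation), and the weighted combination of the losses is given in the next section:

```python
import torch
import torch.nn as nn

h, vocab_size = 768, 30522                       # example sizes

topic_head = nn.Linear(h, 1)                     # W_t, b_t
coref_head = nn.Linear(h, 1)                     # W_c, b_c
bow_head = nn.Linear(h, vocab_size)              # W_w, b_w
bce = nn.BCEWithLogitsLoss()                     # sigmoid + binary cross-entropy

def ssp_losses(e_cls, e_sep, e_cls_teacher, y_topic, y_coref, y_bow):
    # e_sep: (num_utterances, h); e_cls, e_cls_teacher: (h,)
    loss_ts = bce(topic_head(e_sep).squeeze(-1), y_topic)     # topic segmentation
    loss_ci = bce(coref_head(e_sep).squeeze(-1), y_coref)     # coreference identification
    y_hat = torch.sigmoid(bow_head(e_cls))
    loss_wr = torch.norm(y_hat - y_bow, p=2)                  # word reconstruction
    loss_kd = torch.norm(e_cls - e_cls_teacher, p=2)          # knowledge distillation
    return loss_ts, loss_ci, loss_wr, loss_kd
```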
Finally, we combine the training objectives of all the self-supervised tasks and optimize all the parameters in the conversational encoder: ℒ_final= ℒ_KD + αℒ_TS + βℒ_CI + γℒ_WR, where ℒ_final is the final training objective for SSP and α, β, and γ denote hyper-parameters that trade off the self-supervised tasks. § EXPERIMENTAL SETTING §.§ Datasets For fine-tuning the conversational encoder on the conversational search task, we choose two few-shot datasets and evaluate our proposed model based on K-fold cross-validation. CAsT-19 <cit.> is the TREC Conversational Assistance Track (CAsT) 2019 benchmark dataset. It is built by human annotators who are required to mimic real dialogues under specified topics, and it contains frequent coreferences, abbreviations, and omissions. In this work, we pay attention to query de-contextualization, but only the test set provides manual oracle de-contextualized queries. Since the queries in the TREC CAsT dataset are used in the conversational search fine-tuning phase, this would cause a data leakage problem. For a fair comparison, we filter the queries from TREC CAsT out of QReCC. The statistics of the filtered QReCC dataset are shown in Table <ref>. CAsT-20 <cit.> refers to the following year's TREC CAsT. Its most obvious modification is that coreference can also appear in the response (a summarized answer of the gold passage), compared with CAsT-19, where a query only refers to its previous queries. Both manual responses and automatic responses (generated by a neural rewriter <cit.>) are provided in CAsT-20. It contains 216 queries in 25 dialogues, which have de-contextualized queries, and most of the queries have relevance judgments. Additionally, CAsT-20's corpus is the same as CAsT-19's. Detailed statistics are shown in Table <ref>. §.§ Baselines Following <cit.>, we split the baselines into two categories: sparse retrieval methods and dense retrieval methods. Sparse retrieval methods rewrite the contextualized query to a context-independent query and use an ad-hoc sparse retriever to obtain the results. The dense retrieval methods use an ad-hoc dense retriever or directly encode the conversational queries via a conversational dense retriever. ∙ A simple baseline uses the last context-independent query in the dense or sparse retriever to retrieve the documents. ∙ <cit.> is a query rewriting method which inherits from GPT-2 <cit.> and fine-tunes on the CANARD dataset <cit.>. It then employs the ad-hoc retriever to search using the rewritten query. ∙ <cit.> is a data augmentation method that first generates query reformulation data using large amounts of ad-hoc search sessions based on rules and self-supervised learning. The automatically generated data are then used to train the query rewriter. ∙ <cit.> treats the query reformulation task as a binary term classification problem. It decides whether or not to add terms appearing in the dialogue history to the current-turn query. ∙ <cit.> employs a well-trained ad-hoc search encoder, TCT-ColBERT <cit.>. It uses mean pooling to obtain the contextual embedding and fine-tunes on pseudo-relevance labels. ∙ <cit.> develops a few-shot learning method to train the conversational dense retriever. It takes ANCE <cit.> as the teacher model to teach the conversational student model. Integrating the distillation loss and the ranking loss, it obtains good performance on the few-shot datasets. ∙ <cit.> further introduces curriculum denoising to inhibit the influence of unhelpful turns in the context.
An additional two-step multi-task learning scheme further improves its performance. ∙ <cit.> trains on two large automatically generated conversational search datasets, WikiDialog (11.4M dialogues) and WebDialog (8.4M dialogues), starting from a T5-large encoder checkpoint. Moreover, it is further warmed up on the QReCC dataset. Though it is not fine-tuned on CAsT-19 (50 dialogues) or CAsT-20 (25 dialogues), the extremely time-consuming training procedure brings its performance up to a stable level. §.§ Evaluation Metrics Following previous work on conversational search, we evaluate all models based on Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain @3 (NDCG@3). MRR takes the reciprocal rank of a positive sample as its score and averages over all samples. It is a simple yet effective metric for ranking tasks. NDCG@3 considers the importance of positive samples based on their relevance and normalizes the scores of the top 3 samples. The statistical significance of two runs is tested using a two-tailed paired t-test and is denoted using † and for significance (p ≤ 0.05) and strong significance (p ≤ 0.01). §.§ Implementation Details Most settings in this work are similar to ConvDR <cit.>. We employ the ad-hoc retriever ANCE <cit.> as the teacher module to calculate the knowledge distillation loss. Following previous conversational search work, for CAsT-19 we concatenate the historical queries and the current query as the model input, and we additionally take the historical responses into account for CAsT-20. The leading words in the conversational context are truncated if the concatenation exceeds a maximum length, which is 256 and 512 for CAsT-19 and CAsT-20, respectively. We implement the experiments using PyTorch and the Transformers library on an NVIDIA A40 GPU. The Adam optimizer is employed with a learning rate of 2e-5 and a batch size of 64 for CAsT-19 and 32 for CAsT-20. Our model is post-trained for 2 epochs and then fine-tuned on the conversational search corpus. The self-supervised task weights α, β and γ are set to 1e-2, 1e-3, 1e-2 for CAsT-19 and 1e-1, 2e-3, 2e-2 for CAsT-20. We use faiss <cit.> to index the passages, whose representations are generated by ANCE and kept fixed. Following the official evaluation setting of the TREC Conversational Assistance competition, we use relevance scale ≥ 2 as positive for CAsT-19 and relevance scale ≥ 1 for CAsT-20, and obtain our results based on the official evaluation scripts. § EVALUATION RESULT §.§ Overall Performance We compare our model with all baselines in Table <ref>. We find that the sparse methods generally achieve less satisfying performance than the dense conversational methods, which demonstrates that the dense methods can better understand the search intent of users. Our model performs consistently better on both datasets than the other sparse and dense conversational search models, with improvements of 1.4% and 0.4% on the CAsT-19 dataset and 7.1% and 6.7% on the CAsT-20 dataset in terms of MRR and NDCG@3, respectively. This demonstrates that our proposed self-supervised tasks provide a more useful training signal for the conversational encoder than the simple parameter warm-up used in previous methods. In Table <ref>, we also find that one baseline outperforms our method on CAsT-19 in terms of NDCG@3. A possible reason, as illustrated in <cit.>, is that it introduces a stronger query encoder, TCT-ColBERT <cit.>, and takes a multi-stage approach to train its conversational encoder.
In contrast to the complexity of the multi-stage method, our SSP can boost the performance of the existing conversational search model in an end-to-end manner, which is easier to train and deploy in real-world applications. We leave adapting this stronger encoder, TCT-ColBERT, to the post-training paradigm for future work. To verify the generalization ability of SSP, we equip two strong conversational search methods, which provide a better conversational context encoder, with our proposed post-training. From these comparisons, we find that our proposed post-training paradigm can adapt to different conversational search models and boost their performance, which demonstrates the effectiveness and generalization ability of our proposed SSP. §.§ Ablation Study We remove each self-supervised task to analyze the effectiveness of each component; TS is the acronym for topic segmentation, CI denotes coreference identification, and WR denotes word reconstruction. The performance of the ablation models is shown in Table <ref>, and we find that all of the ablation models perform worse than the full model, which demonstrates the contribution of each self-supervised task in SSP. We first ablate the topic segmentation task and observe a decline in search performance. The topic segmentation task helps the model identify the topic boundary in the long session and pay more attention to the utterances of the related topics. This raises the retrieval performance by 3.6% and 2.5% in terms of MRR on the CAsT-19 and CAsT-20 datasets, respectively. Next, we remove the coreference identification self-supervised task, and the performance of this ablation model drops dramatically, which demonstrates that it plays the most important role in SSP. The experiment shows that the full model achieves 4.1% and 1.7% improvements over this ablation in terms of MRR on the CAsT-19 and CAsT-20 datasets. We also remove the word reconstruction task, and the dropped score shows that it is effective in keeping the contextual semantics in the context representation. All of our self-supervised tasks, which provide extra supervision signals to understand the dialogue structure and prevent semantic vanishing, help SSP achieve the best performance according to the experimental results. §.§ Robustness of Topic Segmentation To verify the effectiveness of the topic segmentation of our method, we conduct an experiment that concatenates different lengths of randomly sampled utterances to the beginning of the current conversation session. In this experiment, we compare against a baseline without SSP. Figure <ref> shows the search performance of our SSP and the baseline with different lengths of randomly sampled noise utterances as input. From Figure <ref>, we find that our SSP is more robust to the concatenation of more randomly sampled utterances. When we concatenate more randomly sampled utterances, the performance of the baseline drops dramatically, while that of SSP drops only slightly in the beginning and then remains stable. The reason for this phenomenon lies in that our model can identify the topic segmentation boundary and reduce the impact of unrelated utterances when encoding the current conversational query. This demonstrates that topic segmentation helps the model focus on the utterances of relevant topics. §.§ Case Study We show three cases in Table <ref> to intuitively understand how the self-supervised tasks of SSP improve the performance of existing conversational search methods.
In the first case, the baseline, which treats every historical query equally, struggles with the long dialogue history and retrieves an irrelevant passage. After incorporating SSP, topic segmentation lets the model split out the several most related utterances in the conversational history. With the help of modeling the topic boundary, it easily discovers that “throat cancer” is the referred term for the current query. In the second case, due to the complex historical queries, the baseline is confused about whether “ones” in the last query means “database” or “real-time database”, and it retrieves an unrelated passage. Our proposed coreference identification task lets the model bypass these obstructions and directly point out the referred query, and it successfully finds the accurate result. Contextual semantic vanishing harms performance, since incomplete contextual semantics cannot accurately represent the search intent. In the last case, it makes the model misunderstand the meaning of “avoid” in the current query as “recover”. Its retrieved passage then mainly illustrates “how to recover from sports injuries”. The word reconstruction task demonstrates its effectiveness and keeps the semantic information of “avoid”, which is indispensable during representation learning. The complete contextual semantics leads to more accurate retrieval. §.§ Parameter Tuning In this section, we analyze how much the hyper-parameters α, β, and γ influence the retrieval performance and explore their best setting. We design five groups of experiments for each parameter and each dataset, and the performance comparison is shown in Figure <ref>. We find that the performance of SSP drops only slightly when the parameters change, which demonstrates the hyper-parameter robustness of SSP. Finally, we determine the best setting of α, β, and γ to be 1e-2, 1e-3, 1e-2 for CAsT-19 and 1e-1, 2e-3, 2e-2 for CAsT-20. § CONCLUSION In this work, we propose SSP, a novel post-training framework for conversational search, which can easily be applied to existing methods and boost their performance. Different from the conventional warm-up method, our proposed SSP introduces three self-supervised tasks to better initialize the conversational encoder. These extra supervision signals guide the model to understand the complex conversational structure and effectively prevent contextual semantic vanishing. Extensive experiments conducted on two benchmark datasets prove the effectiveness of SSP, which improves upon previous methods and achieves the best performance. Additional analytical experiments further explain why our self-supervised tasks improve performance. § LIMITATIONS Although we largely improve the performance of existing conversational search methods, the mechanism of the self-supervised tasks in our SSP is simple and intuitive. Additionally, our post-training method relies on an external query reformulation dataset, which is a compromise under the scarcity of conversational search data. However, the essential contribution of this work is that, for the first time, we point out the significance of modeling dialogue structure (especially topic shift) and the phenomenon of contextual semantic vanishing in conversational search. We hope future work will pay more attention to these problems and devise more sophisticated methods to develop more powerful conversational search systems. § ACKNOWLEDGEMENTS We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Natural Science Foundation of China (NSFC Grant No. 62122089 & No.
T2293773), the Beijing Outstanding Young Scientist Program No. BJJWZYJH012019100020098, and the Intelligent Social Governance Platform, Major Innovation & Planning Inter-disciplinary Platform for the “Double-First Class” Initiative, Renmin University of China. § POST-TRAINING DATASET Following the existing conversational dense retrieval methods, we also use a query reformulation dataset for our proposed model. QReCC <cit.> is a query rewriting dataset which contains 14K conversations. The queries in QReCC are collected from three sources: TREC CAsT <cit.>, QuAC <cit.> and NQ <cit.>. The queries in NQ were used as prompts to create conversational queries. We notice that the queries in the TREC CAsT dataset are used in the conversational search fine-tuning phase, which would cause a data leakage problem. For a fair comparison, we filter the queries from TREC CAsT out of QReCC. The statistics of the filtered QReCC dataset are shown in Table <ref>.
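A minimal sketch of the leakage filtering described above (the file layout and field names are hypothetical; the released code should be consulted for the exact procedure) is:

```python
import json

def normalize(q):
    return " ".join(q.lower().split())

with open("cast_queries.json") as f:      # all TREC CAsT queries (hypothetical file)
    cast = {normalize(q) for q in json.load(f)}

with open("qrecc.json") as f:             # QReCC conversations (hypothetical file)
    qrecc = json.load(f)

filtered = [conv for conv in qrecc
            if not any(normalize(turn["question"]) in cast for turn in conv["turns"])]

print(f"kept {len(filtered)} of {len(qrecc)} conversations after removing CAsT overlap")
```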
http://arxiv.org/abs/2307.03171v1
20230706175229
LEO: Learning Efficient Orderings for Multiobjective Binary Decision Diagrams
[ "Rahul Patel", "Elias B. Khalil" ]
cs.AI
[ "cs.AI" ]
Approaches based on Binary decision diagrams (BDDs) have recently achieved state-of-the-art results for multiobjective integer programming problems. The variable ordering used in constructing BDDs can have a significant impact on their size and on the quality of bounds derived from relaxed or restricted BDDs for single-objective optimization problems. We first showcase a similar impact of variable ordering on the Pareto frontier (PF) enumeration time for the multiobjective knapsack problem, suggesting the need for deriving variable ordering methods that improve the scalability of the multiobjective BDD approach. To that end, we derive a novel parameter configuration space based on variable scoring functions which are linear in a small set of interpretable and easy-to-compute variable features. We show how the configuration space can be efficiently explored using black-box optimization, circumventing the curse of dimensionality (in the number of variables and objectives), and finding good orderings that reduce the enumeration time. However, black-box optimization approaches incur a computational overhead that outweighs the reduction in time due to good variable ordering. To alleviate this issue, we propose LEO, a supervised learning approach for finding efficient variable orderings that reduce the enumeration time. Experiments on benchmark sets from the knapsack problem with 3-7 objectives and up to 80 variables show that LEO is ∼30-300% and ∼10-200% faster at enumeration than common ordering strategies and algorithm configuration. Our code and instances are available at <https://github.com/khalil-research/leo>. § INTRODUCTION In many real-world scenarios, one must jointly optimize over a set of conflicting objectives. For instance, solving a portfolio optimization problem in finance requires simultaneously minimizing risk and maximizing return. The field of multiobjective optimization deals with solving such problems. It has been successfully applied in novel drug design <cit.>, space exploration <cit.>, administering radiotherapy <cit.>, supply chain network design <cit.>, among others. In this paper, we specifically focus on multiobjective integer programming (MOIP), which deals with solving multiobjective problems with integer variables and linear constraints. The goal of solving a multiobjective problem is to find the Pareto frontier (PF): the set of feasible solutions that are not dominated by any other solution, i.e., ones for which improving the value of any objective deteriorates at least one other objective. The PF solutions provide the decision-maker with a set of trade-offs between the conflicting objectives. Objective-space search methods iteratively solve multiple related single-objective problems to enumerate the PF but suffer from redundant computations in which previously found solutions are encountered again or a single-objective problem turns out infeasible. On the other hand, decision-space search methods leverage branch-and-bound. Unlike the single-objective case where one compares a single scalar bound (e.g., in mixed-integer linear programming (MIP)), one needs to compare bound sets to decide if a node can be pruned; this in itself is quite challenging.
Additionally, other crucial components of branch-and-bound such as branching variable selection and presolve are still underdeveloped, limiting the usability of this framework. Binary decision diagrams (BDDs) have been a central tool in program verification and analysis <cit.>. More recently, however, they have been used to solve discrete optimization problems <cit.> that admit a recursive formulation akin to that of dynamic programming. BDDs leverage this structure to get an edge over MIP by efficiently encoding the feasible set into a network model which enables fast optimization. To the best of our knowledge, <cit.> were the first to use BDDs to solve multiobjective problems, achieving state-of-the-art results for a number of problems. The variable ordering (VO) used to construct a BDD has a significant impact on its size and consequently any optimization of the diagram. However, the VO problem within BDD-based MOIP has not been addressed in the literature. We address this gap by designing a novel learning-based BDD technique for faster enumeration of the PF. We begin with the following hypothesis: VO has an impact on the enumeration time and an “efficient” VO can reduce it significantly. Following an empirical validation of this hypothesis, we show that such orderings can be found using black-box optimization, not directly in the (exponentially large) space of variable orderings, but rather indirectly in the space of constant-size variable scoring functions. The scoring function is a weighted linear combination of a fixed set of variable properties (or attributes), and the indirect search is in the space of possible weight combinations. <Ref> illustrates how a search in the property-weight space can alleviate the curse of dimensionality to make VO search scalable to problems with many variables. However, solving the black-box optimization problem may be prohibitively time-consuming for any one instance. For variable ordering to be useful in practice, the time required to produce a good VO should be negligible relative to the actual PF enumeration time. To alleviate this issue, we train a supervised machine learning (ML) model on the orderings collected using black-box optimization. A trained model can then be used on unseen (test) instances to predict variable orderings. Should such a model generalize well, it would lead to reduced enumeration times. We refer to our approach as LEO (Learning Efficient Orderings). Our key contributions can be summarized as follows: * We show that variable ordering can have a dramatic impact on solving times through a case study of the multiobjective knapsack, a canonical combinatorial problem. * We show how black-box optimization can be leveraged to find efficient variable orderings at scale. * We design a supervised learning framework for predicting variable orderings which are obtained with black-box optimization on a set of training instances. Our ML models are invariant to permutations of variables and independent of the number of variables, enabling fast training and the use of one ML model across instances of different sizes. * We perform an extensive set of experiments on the knapsack problem and show that LEO is ∼30-300% and ∼10-200% faster than the best non-ML ordering strategy and the SMAC algorithm configuration tool, respectively. * We perform a feature importance analysis of the best class of ML models we have found, extreme gradient boosted ranking trees.
The analysis reveals that: (a) a single ML model can be trained across instances with varying numbers of objectives and variables; and (b) a single knapsack-specific feature that we had not initially contemplated performs reasonably well on its own, though far worse than our ML models. § PRELIMINARIES §.§ Multiobjective Optimization An MOIP problem takes the form ℳ:= min_x{z̅(x): x ∈𝒳, 𝒳⊂ℤ^n_+ }, where x is the decision vector, 𝒳 is a polyhedral feasible set, and z̅: ℝ^n →ℝ^p a vector-valued objective function representing the p objectives. In this work, we focus on the knapsack problem with binary decision variables, hence 𝒳⊂{0,1}^n. Definition: Pareto dominance. Let x^1, x^2 ∈𝒳, y̅^1 = z̅(x^1), y̅^2 = z̅(x^2), then y̅^1 dominates y̅^2 if y̅^1_j ≤y̅^2_j, ∀ j ∈ [p] and ∃ j∈[p]: y̅^1_j < y̅^2_j. Definition: Efficient set. A solution x^1 ∈𝒳 is called an efficient solution if ∄ x^2 ∈𝒳 such that x^2 dominates x^1. The set of all efficient solutions to a multiobjective problem is called an efficient set 𝒳_E. Definition: Pareto frontier (). The set of images of the efficient solutions in the objective space, i.e., 𝒵_N = {z̅(x): x ∈𝒳_E}, is called the . The exact solution approaches to solving multiobjective problems focus on efficiently enumerating its . §.§ BDDs for Multiobjective Optimization A BDD is a compact encoding of the feasible set of a combinatorial optimization problem that exploits the recursive formulation of the problem. Formally, a BDD is a layered acyclic graph G=(n, 𝒩, 𝒜, ℓ, d), composed of nodes in 𝒩, arcs in 𝒜, a node-level mapping ℓ: 𝒩→ [n+1] that maps each node to a decision variable, and an arc-label mapping d: 𝒜→{0, 1} that assigns a value from the variable domain of an arc's starting node to the arc. Here, n is the number of variables of the multiobjective problem ℳ. The nodes are partitioned into n+1 layers L_1, …, L_n+1, where L_l={u: ℓ(u) = l, u ∈𝒩}. The first and last layers have one node each, the root 𝐫 and terminal nodes 𝐭, respectively. The width of a layer L_l is equal to the number of nodes in that layer, |L_l|. The width of a BDD G is equal to the maximum layer width, max_l ∈ [n+1]|L_l|. An arc a:=(r(a), t(a)) ∈𝒜 starts from the node r(a) ∈ L_l and ends in node t(a) ∈ L_l+1 for some l ∈ [n]. It has an associated label d(a) ∈{0, 1} and a vector of values v̅(a) ∈ℝ^p_+ that represents the contribution of that arc to the p objective values. Let 𝒫 represent all the paths from the root to the terminal node. A path e(a_1, a_2, …, a_n) ∈𝒫 is equal to the solution x=(d(a_1),d(a_2), …, d(a_n)) and the corresponding image of this solution in the objective space is given by v̅(e) = ∑_i=1^nv̅(a_i), where the sum is taken elementwise over the elements of each vector v̅(a_i). The BDD representation of ℳ is valid if 𝒵_N = ( ⋃_e ∈𝒫v̅(e) ), where 𝒵_N is the of ℳ and an operator to filter out dominated objective vectors. We refer the readers to <cit.> and <cit.> for the detailed description of the multiobjective knapsack BDD construction. In what follows, we assume access to a BDD construction and enumeration procedure and will focus our attention on the variable ordering aspect. § RELATED WORK §.§ Exact Approaches for Solving MOIP Problems Traditional approaches to exactly solve MOIP can be divided into objective-space search and decision-space search methods. The objective-space search techniques <cit.> enumerate the by searching in the space of objective function values. 
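Since all of these methods ultimately produce the nondominated set defined above, a minimal sketch of the Pareto-dominance check and the filtering operator may help fix ideas (plain Python, assuming minimization in every objective as in the formulation of ℳ; the example vectors are purely illustrative):

def dominates(y1, y2):
    # y1 dominates y2: no worse in every objective and strictly better in at least one (minimization)
    return all(a <= b for a, b in zip(y1, y2)) and any(a < b for a, b in zip(y1, y2))

def nondominated(points):
    # The filtering operator: keep only objective vectors not dominated by any other vector
    pts = list(points)
    return [p for p in pts if not any(dominates(q, p) for q in pts if q != p)]

# (3, 5, 2) is dominated by (3, 5, 1); the remaining two vectors are mutually nondominated
print(nondominated([(3, 5, 2), (3, 5, 1), (4, 4, 4)]))

Objective-space search techniques recover exactly this nondominated set by repeatedly solving single-objective subproblems.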
They transform a multiobjective problem into a single one either by weighted sum aggregation of objectives or transforming all but one objective into constraints. The decision-space search approaches <cit.> instead explore the set of feasible decisions. Both these approaches have their own set of challenges as described in the introduction. We point the reader to <cit.> for a more detailed background. <cit.> showcased that BDD-based approaches can leverage problem structure and can be orders of magnitude faster on certain problem classes; the use of valid network pruning operations along with effective network compilation techniques were the prime factors behind their success. However, they do not study the effect of on the enumeration time. §.§ Variable Ordering for BDD Construction Finding a that leads to a BDD with a minimal number of nodes is an NP-Complete problem <cit.>. This problem has received significant attention from the verification community as smaller BDDs are more efficient verifiers. Along with the size, also affects the objective bounds; specifically, smaller exact BDDs are able to obtain better bounds on the corresponding limited width relaxed/restricted BDDs <cit.>. The techniques for BDD construction can be broadly categorized as exact or heuristic. The exact approaches to <cit.>, though useful for smaller cases, are not able to scale with problem size. To alleviate the issue of scalability for larger problems, heuristic techniques are used; the proposed methodology falls into this category. These heuristics can be general or problem-specific <cit.> but the literature has not tackled the multiobjective optimization setting. A VO method can also be classified as either static or dynamic. Static orderings are specified in advance of BDD construction whereas dynamic orderings are derived incrementally as the BDD is constructed layer-by-layer. The latter may or may not be applicable depending on the BDD construction algorithm; see <cit.> for a discussion of this distinction for Graph Coloring. We focus on static orderings and discuss extensions to the dynamic setting in the Conclusion. We focus on ML-based heuristics for as they can be learned from a relevant dataset instead of handcrafted heuristics developed through a tedious trial-and-error process. §.§ Machine Learning for Variable Ordering <cit.> proposed a learning-based algorithm to construct smaller BDDs for model verification. In particular, they create random orders for a given instance in the training set and tag variable pairs based on their impact on the resulting BDD size. Using this data, they learn multiple pair precedence classifiers. For a new instance at test time, they query each trained pair precedence classifier to construct a precedence table. These tables are merged into one to derive the ordering. The success of this method hinges on the selective sampling of informative variable pairs and the ability to generate orders with sufficient variability in BDD size to increase the chance of observing high-quality labels. As they leverage problem-specific heuristics for selective sampling and rely on random sampling for generating labels, this method is not applicable to our setting. Specifically, we do not have a notion of how informative a given variable pair is. Additionally, random orders may not produce high variability in enumeration time as evidenced by our experiments. 
<cit.> uses active learning to address the problem for BDDs used in program analysis with the goal of minimizing the run time, similar to that of . However, certain differences make it less amenable to our setting. Specifically, the technique to generate the candidate s in <cit.> is grounded in program analysis and cannot be applied to our problem. Instead, leverages bayesian optimization through in conjunction with the novel property-weight search space to generate candidates. <cit.> use an evolutionary search approach to learn a single variable ordering heuristic for a set of instances. The learned heuristic is a sequence of BDD operations (e.g., swapping two variables in an ordering) applied to instances from circuit design and verification, where the BDD represents a boolean function that must be verified efficiently. This approach can be seen as one of algorithm configuration, which we will compare to using SMAC and ultimately outperform. The work of <cit.> is the first learning-based method to address problem for BDDs used in solving discrete optimization problems. Specifically, they learn a policy to order variables of relaxed/restricted BDDs to obtain tighter bounds for the underlying optimization problem using reinforcement learning (RL). A key component of training an RL policy is devising the reward that the agent receives after taking an action in a given state and moving to the next state. Note that we care about reducing the time to enumerate the , which is only available at the end of a training episode. The absence of intermediate rewards in our setting makes RL inapplicable. <cit.> developed an algorithm portfolio approach to select the best strategy from a set of alternatives when constructing relaxed decision diagrams for the (single-objective) graph coloring problem. Fundamental to a portfolio-based approach is the existence of a set of strategies, such that each one of them is better than the others at solving some problem instances. However, such an algorithm portfolio does not exist in our case and part of the challenge is to discover good ordering strategies. There have been some recent contributions <cit.> relating to solving multiobjective problems using deep learning and graph neural networks. However, these approaches are not exact and thus beyond the scope of this paper. We discuss potential extensions of  to the inexact setting in the Conclusion. § METHODOLOGY The proposed methodology, as depicted in <Ref>, is divided into three phases. We apply our technique to the multiobjective knapsack problem (MKP) which can be described as: max_x ∈{0, 1}^n{{∑_i ∈ [n] a^p_i x_i }_p=1^P : ∑_i ∈ [n] w_i x_i ≤ W }. Here, n is the number of items/variables, w_i and a^p_i are the weight and profit corresponding to each item i ∈ [n] and objective p ∈ [P]. Finally, the capacity of the knapsack is W∈ℤ_+. Next, we define an efficient variable ordering, a concept which will be useful in describing . Let 𝒪 be the set of all possible variable orderings and Γ(o), o ∈𝒪 denote the enumeration time over a BDD constructed using order o. Let o^⋆≡_o ∈𝒪Γ(o) be the optimal . Finding o^⋆ among all n! possible permutations of n variables is intractable, so we will aim for an efficient variable ordering () o^e that is as close as possible to o^⋆. Note that our approach is heuristic and does not come with approximation guarantees on the enumeration time of o^e relative to o^⋆. 
The objective in the first phase is to find, for each training instance, an EVO that acts as a label for supervising an ML model. In the second phase, each training instance is mapped to a set of features and an ML model is trained. Finally, we perform model selection and use the chosen model to predict EVOs, referred to as ô^e, that are then used to construct a BDD and compute the for any test instance. §.§ Phase 1: Finding an EVO Since finding an optimal that minimizes the enumeration time is NP-complete, we devise a heuristic approach for finding an , o^e. To find o^e for a given instance ℐ, we use black-box optimization as the run time Γ^ℐ cannot be described analytically and optimized over. A naive approach to find o^e would be to search directly in the variable ordering space, as suggested in Approach #1 of <Ref>. While this might work well for tiny problems, it will not scale with the increase in the problem size as there are n! possible orderings. To alleviate this issue, we define a surrogate search space that removes the dependence on the problem size. Specifically, we introduce score-based variable ordering where we order the variables based on the decreasing order of their total score. For a given problem class, we define a set of properties 𝒦 for its variables that capture some problem structure. For example, in an MKP, the weight of an item can act as property. <Ref> lists all properties of a variable of an MKP. Let g_ik be the property score of variable i for some property k ∈𝒦, w̅ = (w_1, ⋯, w_k) be the property weights in [-1, 1]^|𝒦|. Then, the score of a variable i is defined as s_i ≡∑_k ∈𝒦 w_k ·g_ik/∑_i ∈ [n]g_ik. We recover the variable ordering by sorting them in decreasing order of their score. Thus, as depicted in Approach #2 in <Ref>, the search for o^e is conducted in the surrogate search space [-1, 1]^|𝒦|, which only depends on |𝒦| and not on the number of variables n. Note that defining the search space in this manner gives an additional layer of dynamism in the sense that two instances with the same property weights can have different variable orders. With a slight abuse of notation, let Γ(w̅) represent the time taken to enumerate the using a obtained by property weights w̅. Given a problem instance, the black-box optimizer controls the property weights and the BDD manager computes the enumeration time based on the derived from these property weights. The black-box optimizer iteratively tries different property weight configurations, maintaining a list of incumbents, a process that is depicted in Phase 1 of <Ref>. We propose to use the variable ordering obtained from the best incumbent property weight as the label for learning task. §.§ Phase 2: Dataset Generation and Model Training In this phase, we begin by generating the training dataset. We give special attention to designing our features such that the resulting models are permutation-invariant and independent of the size of the problem. Note that instead of this feature engineering approach, one could use feature learning through graph neural networks or similar deep learning techniques, see <cit.> for a survey. However, given that our case study is on the knapsack problem, we opt for domain-specific feature engineering that can directly exploit some problem structure and leads to somewhat interpretable ML models. Suppose we are given J problem instances, each having n_j variables, j∈[J]. Let α_ij denote the features of variable i∈[n_j] and β_j denote the instance-level context features. 
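Before describing the dataset itself, a minimal sketch of the score-based ordering used throughout Phase 1 may be useful; the two properties and the weight vector below are illustrative stand-ins for the full property table, and the column normalization follows the definition of s_i above:

import numpy as np

def score_based_order(property_values, property_weights):
    # property_values: (n_vars, n_props) matrix g; property_weights: vector w in [-1, 1]^|K|
    g = np.asarray(property_values, dtype=float)
    normalized = g / g.sum(axis=0)                        # divide each property column by its sum over variables
    scores = normalized @ np.asarray(property_weights)    # s_i = sum_k w_k * g_ik / sum_i g_ik
    return list(np.argsort(-scores))                      # variables sorted by decreasing score

# Toy instance: 4 variables, 2 properties (weight, min-value-by-weight)
g = [[10, 0.5], [4, 2.0], [7, 1.1], [2, 0.9]]
w = [-1.0, 0.8]    # a negative weight on 'weight' mimics the MinWt-style warm start
print(score_based_order(g, w))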
Using the features and the EVO computed in Phase 1, we construct a dataset 𝒟= {(α_ij, β_j, r_i(o^e_j): i ∈ [n_j] : j ∈ [J]}. Here, o^e_j is the EVO of an instance j and r_i: ℤ^n_j_+ →ℤ_+ a mapping from the EVO to the rank of a variable i. For example, if n_j = 4 for some instance j and o^e_j = (2, 1, 4, 3), then r_1( o^e_j)=3, r_2( o^e_j)=4, r_3( o^e_j)=1, and r_4( o^e_j)=2. For a complete list of variable and context features, refer to <Ref>. Learning-to-rank (LTR) is an ML approach specifically designed for ranking tasks, where the goal is to sort a set of items based on their relevance given a query or context. It is commonly used in applications such as information retrieval. We formulate the task of predicting the EVO as an LTR task and use the pointwise and pairwise ranking approaches to solve the problem. In the pointwise approach, each item (variable) in the training data is treated independently, and the goal is to learn a model that directly predicts the relevance score or label for each variable. This is similar to solving a regression problem with a mean-squared error loss. Specifically, we train a model f_θ(α_ij, β_j) to predict r_i(o^e_j). Once the model is trained, it can be used to rank items based on their predicted scores ô^e. However, this approach does not explicitly consider the relationships between variables in its loss function. The pairwise approach aims to resolve this issue <cit.>. Let 𝒯_j = {(i_1, i_2): r_i_1(o^e_j) > r_i_1(o^e_j))} be the set of all variable tuples (i_1, i_2) such that i_1 is ranked higher than i_2 for instance j. Then, the goal is to learn a model that maximizes the number of respected pairwise-ordering constraints. Specifically, we train a model g_ϕ(·) such that the number of pairs (i_1, i_2) ∈𝒯_j for which g_ϕ(α_i_1j, β_j) > g_ϕ(α_i_2j, β_j) is maximized. This approach is better equipped to solve the ranking problem as the structured loss takes into account the pairwise relationships. §.§ Phase 3: Model Selection and Testing We follow a two-step approach to perform model selection as the ranking task is a proxy to the downstream task of efficiently solving a multiobjective problem. Firstly, for each model class (e.g., decision trees, linear models, etc.), we select the best model based on Kendall's Tau <cit.>, a ranking performance metric that measures the fraction of violated pairwise-ordering constraints, on instances from a validation set different from the training set. Subsequently, we pit the best models from each type against one another and select the winner based on the minimum average enumeration time on the validation set. Henceforth, for previously unseen instances from the test set, we will use the model selected in Phase 3 to predict the EVO and then compute the . § COMPUTATIONAL SETUP Our code and instances are available at <https://github.com/khalil-research/leo>. All the experiments reported in this manuscript are conducted on a computing cluster with an Intel Xeon CPU E5-2683 CPUs. We use <cit.> – a black-box optimization and algorithm configuration library – for generating the labels. The ML models are built using Python 3.8, Scikit-learn <cit.>, and XGBoost <cit.>, and SVMRank <cit.>. The “BDD Manager” is based on the implementation of <cit.>, which is available at <https://www.andrew.cmu.edu/user/vanhoeve/mdd/>. §.§ Instance Generation We use a dataset of randomly generated MKP instances as described in <cit.>. The values w_i and a^p_i are sampled randomly from a discrete uniform distribution ranging from 1 to 100. 
The capacity W is set to ⌈ 0.5 ∑_i ∈ I w_i ⌉. We generate instances with sizes 𝒮 = {(3, 60), (3, 70), (3, 80), (4, 50), (5, 40), (6, 40), (7, 40)}, where the first and second component of the tuple specify the number of objectives and variables, respectively. For each size, we generate 1000 training instances, 100 validation instances, and 100 test instances. §.§ Instrumenting SMAC As a labeling tool: To generate EVOs for the learning-based models, we use  <cit.>. In what follows, refers to the use of SMAC as a black-box optimizer that finds an EVO for a given training instance. Specifically, solves w̅^e_j = min_w̅Γ_j(w̅) for each instance j in the training set; we obtain o^e_j by calculating variable scores – a dot product between w̅^e_j and the corresponding property value – and sorting variables in the decreasing order of their score. As a baseline: The other more standard use of SMAC, which we refer to as , is as an algorithm configuration tool. To obtain a ordering, we use to solve w̅^e_D = min_w̅𝔼_j ∼ [J][Γ_j(w̅)] and obtain an order o^e_D_j for instance j using single property weight vector w̅^e_D. The expectation of the run time here simply represents its average over the |J| training instances. Note that we get only one property weight vector for the entire dataset in the case rather than one per instance as in the case. However, we obtain an instance-specific VO when using the configuration as the underlying property values change across instances. Initialization: For both uses of SMAC, we use the ordering as a warm-start by assigning all the property weights to zero except the weight property, which is set to -1. This reduces the need for extensive random exploration of the configuration space by providing a reasonably good ordering heuristic. Random seeds: As SMAC is a randomized optimization algorithm, running it with multiple random seeds increases the odds of finding a good solution. We leverage this idea for as its outputs will be used as labels for supervised learning and are thus expected to be of very high quality, i.e., we seek instance-specific parameter configurations that yield variable orderings with minimal solution times. We run with a single seed and use its average run time on the training set as a target to beat for . Since the latter optimizes at the instance level, one would hope it can do better than the distribution-level configuration of . As such, for , we run on each instance with 5 seeds for all sizes except (5, 40) and (6, 40), for which we used one seed. We start with one seed, then average the run time of the best-performing seed per instance to that of the average enumeration time of on the training set, and relaunch with a new seed until a desired performance gap between and is achieved. Computational budget: In the setting, we run with a 12-hour time limit, whereas in the case the time limit is set to 20 minutes per instance except for sizes (3, 60), (4, 50), (5, 40), for which it is set to 5 minutes per instance. In both settings, runs on a 4-core machine with a 20GB memory limit. It can be observed that generating the labels can be computationally expensive. This dictates the choice of sizes in the set of instance sizes, 𝒮. Specifically, we select instance sets with an average running time of at most 100 seconds (not too hard) and at least 5 seconds (nor too easy) using the top-down compilation method described in <cit.>. 
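For reference, a small generator reproducing the instance distribution described at the start of this section (profits and weights drawn uniformly from {1, …, 100}, capacity set to half the total weight); the dictionary layout is illustrative rather than the authors' file format:

import math
import random

def generate_mkp_instance(n_vars, n_objs, seed=0):
    rng = random.Random(seed)
    weights = [rng.randint(1, 100) for _ in range(n_vars)]
    # One profit vector per objective, drawn from the same discrete uniform distribution
    profits = [[rng.randint(1, 100) for _ in range(n_vars)] for _ in range(n_objs)]
    capacity = math.ceil(0.5 * sum(weights))
    return {"weights": weights, "profits": profits, "capacity": capacity}

inst = generate_mkp_instance(n_vars=40, n_objs=5, seed=42)
print(inst["capacity"], len(inst["weights"]), len(inst["profits"]))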
§.§ Learning Models We use linear regression, ridge regression, lasso regression, decision trees, and gradient-boosted trees (GBT) with mean-squared error loss to build size-specific pointwise ranking models. Similarly, we train support vector machines and GBT with pairwise-ranking loss to obtain pairwise ranking models for each size. In the experimental results that follow, the best learning-based method will be referred to as . This turns out to be GBT trained with pairwise-ranking loss. The GBT models that were selected achieved a Kendall's Tau ranging between 0.67 to 0.81 across all problem sizes on the validation set. The model selection follows the procedure mentioned in Phase 3 of the Methodology section. In terms of features, we omit the context features in <Ref> when training linear size-specific models, as these features take on the same value for all variables of the same instance and thus do not contribute to the prediction of the rank of a variable. The context features are used with non-linear models such as decision trees and GBT. We also train two additional GBT models – ML+A and ML+AC – with pairwise-ranking loss, on the union of the datasets of all sizes. In particular, ML+A is trained with only variable features, similar to , whereas ML+AC adds the instance context features to the variable features. §.§ Baselines To evaluate the performance of learning-based orderings, we compare to four baselines: – uses the (arbitrary) default variable ordering in which the instance was generated. – orders the variables in increasing order of their weight values, w_i. This is a commonly used heuristic for solving the single-objective knapsack problem. – orders the variables in decreasing order of the property min-value-by-weight detailed in <Ref>, which is defined as min{a^p_i}_p=1^P / w_i. This rule has an intuitive interpretation: it prefers variables with larger worst-case (the minimum in the numerator) value-to-weight ratio. It is not surprising that this heuristic might perform well given that it is part of a 1/2-approximation algorithm for the single-objective knapsack problem <cit.>. – , as described in an earlier paragraph. It produces one weight setting for the property scores per instance set. This baseline can be seen as a representative for the algorithm configuration paradigm recently surveyed by <cit.>. § EXPERIMENTAL RESULTS We examine our experimental findings through a series of questions that span the impact of on PF enumeration time (Q1), the performance of  as a black-box optimizer that provides labels for ML (Q2), the performance of  on unseen test instances and comparison to the baselines (Q3, Q4), a feature importance analysis of the best ML models used by  (Q5), and an exploration of size-independent, unified ML models (Q6). §.§.§ Q1. Does variable ordering impact the Pareto frontier enumeration time? To test the hypothesis that VO has an impact on the PF enumeration time, we compare the run time of ten heuristic orderings against the expected run time of random orderings. By the run time of an ordering, we mean the time taken to compute the PF on a BDD constructed using that particular ordering. The heuristic orderings are constructed using the variable properties. For example, the heuristic ordering called max_avg-value-by-weight sorts the variables in descending order of the ratio of a variable's average value by its weight. 
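A minimal sketch of how such property-based heuristic orderings can be computed from an instance; only three of the heuristics are shown, and the instance dictionary reuses the illustrative layout of the generator above:

def heuristic_orders(inst):
    w, profits = inst["weights"], inst["profits"]          # profits: one list of values per objective
    n = len(w)
    per_var = [[p[i] for p in profits] for i in range(n)]
    avg_vbw = [sum(v) / len(v) / w[i] for i, v in enumerate(per_var)]
    min_vbw = [min(v) / w[i] for i, v in enumerate(per_var)]
    return {
        "min_weight": sorted(range(n), key=lambda i: w[i]),                      # the MinWt baseline
        "max_min-value-by-weight": sorted(range(n), key=lambda i: -min_vbw[i]),  # the MVBW baseline
        "max_avg-value-by-weight": sorted(range(n), key=lambda i: -avg_vbw[i]),
    }

toy = {"weights": [10, 4, 7], "profits": [[5, 8, 7], [9, 2, 6]], "capacity": 11}
print(heuristic_orders(toy)["min_weight"])    # [1, 2, 0]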
We estimate the expected run time of a random ordering by sampling 5 heuristic variable orderings uniformly at random from all possible n! orderings and averaging their run time. <Ref> summarizes the results of this experiment. We work with 250 MKP instances of the problem sizes (3, 20), (3, 40), (3, 60), (3, 80), (5, 20), (5, 30), (5, 40), (7, 20), (7, 30), and (7, 40). The values in the table are the ratios of the average run time of a heuristic ordering to that of the 5 random orderings; values smaller than one indicate that a heuristic ordering is faster than a random one, on average. The best heuristic ordering for each size is highlighted. First, it is clear that some heuristic orderings consistently outperform the random ones across all problem sizes (min_weight, max_min-value), and by a significant margin. In contrast, some heuristic orderings are consistently worse than random. For example, the heuristic ordering max_weight, min_avg-value, and min_min-value consistently underperform when the number of variables is more than 20. Second, the choice of as a baseline was motivated by the results of this experiment as min_weight wins on most sizes as highlighted in <Ref>. Altogether, this experiment validates the hypothesis that has an impact on the PF enumeration time and that there exists that can significantly reduce this time. §.§.§ Q2. Can  find good variable orderings for training instances? Having established in Q1 that the search for high-quality variable orderings is justified, we now turn to a key component of Phase 1: the use of  as a black-box optimizer that produces “label” variable orderings for subsequent ML in Phase 2. Figure <ref> shows the average run time, on the training instances, of the incumbent property-weight configurations found by . These should be interpreted as standard optimization convergence curves where we seek small values (vertical axis) as quickly as possible (horizontal axis). Indeed,  performs well, substantially improving over its initial warm-start solution (the  ordering). The flat horizontal lines in the figure show the average run time of the single configuration found by . The latter is ultimately outperformed by the  configurations on average, as desired. §.§.§ Q3. How does perform in comparison to baseline methods? <Ref> presents the enumeration across different problem sizes and methods on the test set. In this question, we will examine only the first of three ML models, namely the first one referred to as  in the table; the two other models will be explored in Q7. We can observe that consistently outperforms all baselines in terms of the geometric mean (GMean) of the enumeration time, across all problem sizes. acts as a strong baseline for the method, consistently being the second-best in terms of average GMean, except for size (7, 40). The methods and have a nice connection to the single-objective knapsack problem <cit.>, as discussed earlier; this might explain why these heuristics reduce the number of intermediate solutions being generated in the BDDs during enumeration, helping the algorithm terminate quickly. They closely follow in terms of GMean metric with sizes having more than 3 objectives; however, they are almost twice as worst as on instances with 3 objectives. Interestingly, when the number of objectives is larger than 3, outperforms (and vice versa). This underscores the relationship between the number of objectives and heuristics, i.e., one heuristic might be preferred over another depending on the structure of the problem. 
This also highlights why the method performs better than the heuristics as the feature engineering helps create an ensemble of them rather than using only one of them. Lastly, we observe that method has the worst performance across all sizes. To complement <Ref>, we present box plots and performance profiles in <Ref> and <Ref>, respectively. Note how the enumeration time distribution for the method is much more concentrated, with a smaller median and only a few outliers compared to other methods. Note that the instances that had a timeout are omitted from this analysis. This performance improvement leads to a larger fraction of instances being solved in a smaller amount of time as highlighted in <Ref>. §.§.§ Q4. What explains 's performance? Traditionally, smaller-sized exact BDDs are sought after for efficiently performing tasks such as model checking or computing objective function bounds. Hoping to find a similar connection in the multiobjective setting, we analyze the relationship between the topology of BDDs generated by different methods and the time to compute the in <Ref>. Considering the fact that performs the worst among all methods and the impact of BDD size on the downstream task, we expect that the size of the BDDs generated by to be bigger. This holds true across different sizes and methods as it can be observed in <Ref> that Nodes and Width values are less than 100. Extrapolating, one would expect BDDs generated using orderings to be the smallest, as it achieves the best performance in terms of enumeration time. Counter-intuitively, that is not the case: generates the smallest-sized BDDs on average. For instance, the value of Nodes for size (3, 80) is 68.79% for , which is lower than 91.09% for . However, the reduction in size does not translate to improvements in the running time, as we already know that performs best in terms of time. We can decipher the performance gains of by studying the “Checks” metric. This metric can be thought of as a proxy of the work done by the enumeration algorithm; indeed, this metric is positively correlated with Time. This phenomenon can be further studied in <Ref>, which shows the mean cumulative solutions generated up to a given layer in the BDD. Clearly, has the least number of intermediate solutions generated, which also translates to fewer Pareto-dominance checks and smaller Checks. To summarize, smaller-sized BDDs can be efficient in enumerating the . However, reducing the BDD size beyond a certain threshold would inversely affect its performance as it leads to more intermediate solutions being generated, increasing the number of Pareto-dominance checks. This also validates the need for task-specific methods like that are specifically geared towards reducing the run time rather than optimizing a topological metric of the BDD. §.§.§ Q5. How interpretable are the decisions made by ? uses GBT with the pairwise ranking loss for learning to order variables. To obtain feature importance scores, we count the number of times a particular feature is used to split a node in the decision trees. We then normalize these scores by dividing them by the maximum score, resulting in values in [0, 1] for each size. <Ref> is a heatmap of feature importance scores for different sizes. We can note that the min-value-by-weight feature is important across all sizes, especially for cases with more than 3 objectives. 
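As a hedged illustration of how such normalized split-count importances can be read off a pairwise-ranking GBT in XGBoost (the toy feature matrix, relevance labels, group sizes, and hyperparameters below are stand-ins, not the paper's training setup):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
feature_names = ["weight", "avg-value-by-weight", "min-value-by-weight", "std-value"]
X = rng.random((200, len(feature_names)))     # 4 toy instances x 50 variables each
y = rng.integers(0, 10, size=200)             # toy relevance labels (higher = earlier in the ordering)
groups = [50, 50, 50, 50]                     # variables are ranked only within their own instance

model = xgb.XGBRanker(objective="rank:pairwise", n_estimators=50, max_depth=4)
model.fit(X, y, group=groups)

# Split-count ("weight") importance, normalized by the maximum score as described above
scores = model.get_booster().get_score(importance_type="weight")   # keys 'f0', 'f1', ... follow column order
max_score = max(scores.values())
print({feature_names[int(k[1:])]: round(v / max_score, 2) for k, v in scores.items()})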
In fact, the choice of heuristic was driven by feature importance scores and is a case in point of how learning-based methods can assist in designing good heuristics for optimization. Furthermore, the real-valued features are more important than the categorical rank features for problems with more than 3 objectives. For problems with 3 objectives, the std-value feature deviation is extremely crucial. Also, rank feature rk_max_avg-value-by-weight receives higher importance than some of the real-valued features. It is also interesting to observe that certain rank-based features are consistently ignored across all sizes. In a nutshell, the heatmap of feature importance scores helps in interpreting the features that govern the decisions made by , which also happens to be in alignment with a widely used heuristic to solve the single-objective knapsack problem. §.§.§ Q6. Can we train a single size-independent ML model? We answer in the affirmative: in fact, the methods ML+A and ML+AC in <Ref> refer to two models trained on the union of all training datasets of different sizes. As described in the section “Learning Models”, ML+A uses the same variable features as , whereas ML+AC adds the instance context features described in <Ref>. These two models perform remarkably well, sometimes even outperforming the size-specific  models that are being tested in-distribution. The main takeaway here is that our ML model architecture, which is independent of the number of variables in an instance, enables the training of unified models that are size-independent. In turn, these models perform very well. Combined with the simple features and types of ML models we have used, this finding points to a great potential for the use of ML in multiobjective BDD problems. The appendix includes feature importance plots for both of these models. ML+AC seems to make great use of the context features in its predictions, as expected. § CONCLUSION is the first machine learning framework for accelerating the BDD approach for multiobjective integer linear programming. We contribute a number of techniques and findings that may be of independent interest. For one, we have shown that variable ordering does impact the solution time of this approach. Our labeling method of using a black-box optimizer over a fixed-size parameter configuration space to discover high-quality variable orderings successfully bypasses the curse of dimensionality; the application of this approach to other similar labeling tasks may be possible, e.g., for backdoor set discovery in MIP <cit.>. An additional innovation is using size-independent ML models, i.e., models that do not depend on a fixed number of decision variables. This modeling choice enables the training of unified ML models, which our experiments reveal to perform very well. Through a comprehensive case study of the knapsack problem, we show that  can produce variable orderings that significantly reduce enumeration time. There are several exciting directions for future work that can build on : – The BDD approach to multiobjective integer programming has been applied to a few problems other than knapsack <cit.>. Much of  can be directly extended to such other problems, assuming that their BDD construction is significantly influenced by the variable ordering. One barrier to such an extension is the availability of open-source BDD construction code. 
As the use of BDDs in optimization is a rather nascent area of research, it is not uncommon to consider a case study of a single combinatorial problem, as was done for example for Graph Coloring in <cit.> and Maximum Independent Set in <cit.>. – Our method produces a static variable ordering upfront of BDD construction. While this was sufficient to improve on non-ML orderings for the knapsack problem, it may be interesting to consider dynamic variable orderings that observe the BDD construction process layer by layer and choose the next variable accordingly, as was done in <cit.>. – We have opted for rather interpretable ML model classes but the exploration of more sophisticated deep learning approaches may enable closing some of the remaining gap in training/validation loss, which may improve downstream solving performance. – Beyond the exact multiobjective setting, extending  to a heuristic that operates on a restricted BDD may provide an approximate much faster than full enumeration. We believe this to be an easy extension of our work. § APPENDIX §.§ Feature importance plots for size-independent ML models
http://arxiv.org/abs/2307.02012v1
20230705035358
A Survey Report on Hardware Trojan Detection by Multiple-Parameter Side-Channel Analysis
[ "Samir R Katte", "Keith E Fernandez" ]
cs.CR
[ "cs.CR", "cs.AR" ]
A Survey Report on Hardware Trojan Detection by Multiple-Parameter Side-Channel Analysis Samir R Katte^1, Keith E Fernandez^1 UCLA Electrical Engineering, Los Angeles, CA 90095^1 § ABSTRACT A major security threat to an integrated circuit (IC) design is the Hardware Trojan attack, which is a malicious modification of the design. Several papers have previously investigated side-channel analysis to detect the presence of Hardware Trojans. Side-channel analysis was prescribed in these papers as an alternative to conventional logic testing for detecting malicious modifications in the design. It has been found that conventional logic testing is ineffective when it comes to detecting small Trojans, owing to the decrease in sensitivity caused by process variations encountered in manufacturing. The main paper <cit.> under consideration in this survey report focuses on proposing a new technique to detect Trojans by using multiple-parameter side-channel analysis. The novel idea will be explained thoroughly in this survey report. We also look into several other papers <cit.><cit.><cit.><cit.><cit.>, which describe single-parameter analysis techniques and how they are implemented. We analyze the shortcomings of those single-parameter analysis techniques and then show how the multi-parameter analysis technique improves on them. Finally, we discuss the combined side-channel analysis and logic testing approach, which yields higher detection coverage for hardware Trojan circuits of different types and sizes. Keywords: Hardware Trojans, multiple parameter side-channel analysis, process variation, logic testing. § INTRODUCTION Hardware Trojans are among the most recent security threats to integrated circuits (ICs), and it is necessary to verify that a given IC contains no malicious modification, i.e., that a hardware Trojan has not been inserted into it. One major cause of such attacks is the outsourcing of IC fabrication to foreign countries. If the foundry is untrusted, it can insert a hardware Trojan into the existing design of the IC. The adversary should be intelligent enough to insert a Trojan which is undetected by conventional testing techniques but gets activated when the IC is being used for its original operation. There are two ways to create such a Trojan. One way is to trigger its operation externally, while the other is to make the Trojan dependent on rare circuit conditions. The author of the main paper under consideration <cit.> uses some terminology that is important for understanding the analysis of the paper. The node affected by the Trojan is called the payload. The condition under which the Trojan is activated is called the trigger condition. This trigger condition can be purely combinational, sequentially related to the clock, or a set of rare events. The malicious effects of Trojan payloads can be either passive, like revealing some secret information in a cryptographic IC, or active, like entirely changing the functionality of the circuitry. There are two types of non-destructive Trojan detection techniques: logic testing and side-channel approaches.
The conventional testing of an IC aims at the functional validation and it does not provide the high coverage for Trojan detection. So statistical logic testing has been suggested before which generates the structural tests to activate rare events and propagate the malicious effects to the primary outputs. This approach can be used when the size of the Trojan is ultra small i.e only of few gates in size. It becomes cumbersome if the kind of Trojan is complex sequential or if the number of Trojan instances is large. Apart from the Trojan detection technique mentioned above, there are several other physical side-channel parameters like the power signature which can be measured to detect the presence of hardware Trojans. Unlike in the statistical logic testing method, these approaches do not require us to trigger the Trojan and observe its impact at the primary output. But still there can be extreme variations in the measured side-channel parameters due to process variations. The present side channel approaches have certain shortcomings. Firstly with the increase in the process variations, the process calibration techniques and hence the Trojan detection sensitivity reduces. Secondly they consider only die-to-die process variation and ignore local within-die variations. Lastly they need design modifications which can be exploited by the adversary. Also as the size of the circuit increases and the size of the Trojan increases, it becomes tougher to detect Trojans because the detection sensitivity of the side-channel approaches decreases. The paper<cit.> describes a novel non-invasive multiple parameter side-channel analysis approach for effective detection of complex Trojans under large process-induced parameter variations. The method looks at the correlation of the intrinsic leakage (I_DDQ) to the maximum operating frequency (F_MAX), so that we can identify fast, intrinsically leaky ICs from the defective ones. This method does not only look at the power signature. It uses the dependencies between the transient supply current (I_DDT) and F_MAX of the circuit to find the Trojan infected ICs. There are several salient features of this paper as it touches upon this novel method. The technique described requires zero modification to the design flow and there is no hardware overhead. The major contribution of the paper is that it provides a theoretical analysis of the relationship between the multiple parameters and how it is used for reducing the process noise and for identifying Trojans. The FPGA based approach described provides both simulation verification and hardware validation. The impact of this paper is immense because it is the one of the first few papers to look into multiple parameter side-channel analysis, unlike previous papers which looked into single parameters only. Firstly it looks into a structural test-generation approach which minimizes the switching activity in different parts of the design and increases the activity of an arbitrary Trojan in the region under test. After that it proposes using power gating techniques to improve signal to noise by reducing the background current. It also introduces a third parameter called quiescent current or I_DDQ to improve the confidence of detection. It also looks into how to increase the detection sensitivity by proper choice of test conditions like operating voltage and frequency. 
Subsequently the paper also proposes the integration of the proposed side-channel approach and the statistical logic testing approach, which can detect Trojans of different types and sizes. The rest of this survey paper is organized as follows: Section 3 talks about the related work of this topic which basically talks about the work done in the papers apart from the main paper under survey. Section 4 explains the theory behind the operation of hardware Trojans and other technical terms. Section 5 compares the main paper with other papers and presents our findings. Section 6 gives an overview of the results obtained in the main paper. Finally Section 7 concludes the paper and section 8 is our critique of the entire set of papers. Also we list the references and the list of papers presented in the class by the authors in section 9 and section 10 respectively. § RELATED WORK 0.1in There have been several previous attempts to detect hardware Trojans in fabricated ICs. Most of these papers talk about a single parameter side-channel analysis to detect the presence of a malicious modification in the design. <cit.> investigates a power supply transient signal analysis method for detecting Trojans. It basically analyzes multiple power port signals. More precisely the paper<cit.> focuses on determining the smallest detectable Trojan in a set of process simulation models. Ten different layouts are being analyzed here. One of them is Trojan free and the others are inserted with a few gates to model a Trojan. Simulated models are extracted from the layers and the simulation data is analyzed to know when a Trojan can be detected. The results of the sensitivity analysis show that it is possible to reliably detect unactivated Trojans which are created using as few as four standard cell gates. <cit.> focuses on hardware Trojan detection using path delay fingerprint. The authors specially focus on the detection of explicit payload Trojan. This paper introduced this new category of hardware Trojans based on how the payload part of the Trojans works. The new category is divided into implicit Trojans and explicit Trojans. The explicit payload Trojan works under a typical two-phase manner: triggered and propagated payload. When the Trojan is triggered, the payload part will change the internal control signals or data signals and we result in the chip to perform erroneous or propagate secret information such as symmetrical keys to some output pins. This type of Trojan will insert extra delay in some paths passing those signals. On the other hand, implicit payload Trojans has a similar trigger part as the explicit payload Trojan but different payload working mechanics. The implicit payload Trojan does not compromise internal signals but only takes these signals as a stimulus of the trigger. After the Trojan is triggered, the implicit payload part will behave in a different way than it does in explicit Trojan. The implicit Trojan can leak secret information by emitting radio waves or may destroy the entire chip. The signal to trigger the implicit payload Trojan has a larger capacity load and hence consumes more power and it also makes some path delays larger. But if we compared the path delay of the explicit payload Trojan, the added delay by the implicit payload Trojan is smaller and much harder to detect. Another way to distinguish between these two types of Trojans is that an explicit payload Trojan can be detected using exhaustive traditional functional tests unlike implicit payload Trojan. 
<cit.> had experimental results which showed the detection rate of explicit payload Trojans to be 100%. The method described by the authors should be developed further to be used for implicit payload Trojan detection. <cit.> talks about single parameter side-channel analysis wherein it uses the transient power analysis of the IC for Trojan detection. The paper proposes a non-destructive approach which characterizes and compares transient power signature using principle component analysis. The approach is validated with hardware measurement results. The test setup is FPGA based and this approach can discover small (<1.1% area) Trojans under large noise and variation. <cit.> discusses a technique for precisely measuring the combinational delay of an arbitrarily large number of register-to-register paths internal to the functional portion of the IC. It is suggested that this technique can be used to provide the desired authentication and design alteration detection. This technique is low cost and it does not affect the main IC functionality and can be performed at-speed at both the test-time and run time. Understanding the working of a Physical Unclonable Functions (PUF) is important for completely understanding the contents of <cit.>. The theory of PUF is covered at the end of the next section. In <cit.>, the authors suggested the technique which can be used to generate longer signatures than the existing PUF designs. Also unlike other techniques that are applied to non-functional paths specifically inserted to be used as a PUF, this technique can be performed on the functional paths of the core circuit without affecting timing and functionality. Thus it makes this technique significantly more difficult for an attacker to bypass, remove, or spoof the signature extraction unit. A novel on-chip structure including a ring oscillator network (RON), distributed across the entire chip, is proposed in <cit.> to verify whether the chip is Trojan-free or not. This structure eliminates the problem of measurement noise and localizes the measurement of dynamic power. The authors have presented the simulation results for Trojans inserted into 90nm technology circuits. There are also some experimental results present in the paper which demonstrates the efficiency and the scalability of the RON architecture for Trojan detection. The next section talks about the theory required to understand the analysis of the papers. § THEORY 0.1in Hardware Trojans are classified based on the method of activation and the effect it has on the circuit functionality. If the Trojan is combinationally triggered, for example an A=B condition occurs at the trigger inputs of the Trojan which changes the expected output at the node from ER to an incorrect value ER*. The adversary chooses a very rare condition for the Trojan activation so that it is not triggered during conventional manufacturing tests. Sequentially triggered Trojans are activated by occurrence of a sequence of rare events or after a period of continuous operation. An example of sequentially triggered Trojans is that an asynchronous k-bit counter activates a Trojan when the count reaches 2^k-1 and hence modifies the node ER to an incorrect value of ER*. Another type of Trojan is one which consists of a linear feedback shift register. It is used to leak the secret key in cryptographic hardware by helping in side-channel attacks. It is difficult to have a single Trojan detection mechanism because of the variety of Trojans that need to be detected. 
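To get a feel for how rarely such trigger conditions fire, the following toy behavioral model (our own illustration, not taken from the surveyed papers) compares a combinational A=B trigger on two 16-bit buses with a sequential 8-bit counter trigger under random stimulus; the 1% increment probability is an assumed stand-in for a rare internal event:

import random

random.seed(1)
CYCLES = 100_000

# Combinational trigger: fires only when two internal 16-bit buses happen to carry equal values (A == B)
comb_fires = sum(random.getrandbits(16) == random.getrandbits(16) for _ in range(CYCLES))

# Sequential trigger: a k-bit counter payload activates only when the count reaches 2^k - 1
k, count, seq_fired_at = 8, 0, None
for cycle in range(CYCLES):
    if random.random() < 0.01:        # assumed rare increment condition (~1% of cycles)
        count += 1
    if count == 2 ** k - 1:
        seq_fired_at = cycle
        break

print(f"combinational trigger fired {comb_fires} time(s) in {CYCLES:,} cycles")
print(f"sequential trigger first fired at cycle {seq_fired_at}")

Even in this crude model, the rare conditions fire so infrequently under random stimulus that purely functional tests have little chance of exciting the Trojan and observing its payload.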
Destructive testing of a chip is expensive due to depackaging, de-metallization and micro-photography implemented in the whole process. Also this technique is not feasible because the attacker can insert the Trojan into a small subset of the manufactured IC. So non-destructive methods are used. The logic based testing aims at triggering rare events at the internal nodes of the circuit to activate Trojans and then observe the outputs. Meanwhile in side-channel analysis-based Trojan detection, we observe the effect of Trojan insertion on the parameters like circuit transient current, leakage current, etc. If the values are beyond a particular threshold value, we can say that a Trojan is detected. But both the Trojan detection techniques have their own pros and cons. The main problem in logic testing is the extremely large Trojan design space and hence the complete enumeration and the test generation are infeasible in computational terms. Side-channel analysis is advantageous because it does not look into the malfunction of the circuit but looks into the changes in the side-channel parameter. Problems in side-channel analysis include large process-induced parameter variations and measurement noise which can mask the effect of insertion of small Trojans. But none of the previous works suggest techniques which eliminates the local and the large process variations. <cit.> talks about Physical Unclonable Functions (PUF). So we will explain PUF briefly so that it will be easier to understand the technique proposed by the author. PUFs are functions that map a set of challenges to responses that are generated from, and hence reflect, the unique physical characteristics of each device. Therefore PUFs can provide higher security than other soft-key-based cryptographies because they extract the security information from the physical system itself, rather than storing this information in non-volatile memory. The working of a PUF is explained as followed: An n-bit challenge vector (say C) is input to an n-stage string of switch blocks as the control vector i.e. one bit for each switch block. Each switch block has two inputs, two outputs and a control bit. Depending on the value of the control bit, the inputs of the switch block will either go directly to the outputs or will be exchanged. Thus the circuit can create a pair of delay paths for each input vector. For evaluation, a rising edge is input to stimulate both of the circuit paths. The signal passes through these two paths and one of the outputs of the last switch block is used as the response to that challenge vector. The known output to the challenge is compared to the response for authentication. § COMPARISON WITH OTHERS PAPERS 0.1in The main issue with single side channel techniques is that they are prone to large process variations and it has a large susceptibility to process noise. Most Side Channel Techniques described in papers <cit.>, <cit.>, <cit.>, <cit.> and <cit.> could only deal with one of the two problems. But as scaling increases, these problems are more prominent at lower design nodes. A case may arise where a single parameter signature of a Trojan-Infested IC falls within the threshold of a Genuine IC. This paper <cit.> deals with a unique multiple side parameter method which models process variation as well as process noise, and thus increases the probability of finding a Trojan within a circuit. 
We also compare the method described in <cit.> with the single parameter methods listed in papers <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. 5.1 Methodology Every large integrated circuit has millions of transistors, and every transistor consumes some amount of leakage power when it is off, and also it contributes to the dynamic power consumption when it is on. This means that even a Trojan circuit when inserted in large IC will contribute to the overall static and dynamic power consumption. There are advantages of using Power analysis as a means to identify Trojans 1) It does not require the Trojan to be fully on to detect it, and 2) It is non-invasive, meaning no extra circuitry is required to detect the Trojan. The technique in this paper uses a multiple parameter power analysis method to detect Trojans. The technique in <cit.> is compared with a transient power technique in <cit.>, timing related techniques in <cit.>,<cit.> and <cit.> and transient current method given in <cit.>. However such non-invasive power techniques have a few challenges. 1) If the Trojan circuit is less than 0.1% of the total circuit size it will cause a negligible change in the supply current and power consumption. 2) The impact of process variation on leakage current at lower design node can cause such a large variation that even a Trojan IC could emit a signature of a non-Trojan IC. 5.2 Multiple Parameter Trojan Detection In relation to <cit.> the main issue with analyzing single parameters like I_DDT and F_MAX is process noise and process variation. Process variation can mask the effect of a Trojan and make it look like a genuine part of Circuit. If we plot the spread of I_DDT with taking process variation into consideration, (Figure 1) we see that the Trojan is easily masked making it difficult to decipher. The Trojan in this example is an 8 bit counter and it has a very pronounced effect on I_DDT. This is a very serious issue because if a Trojan of a much smaller size is inserted into the circuit process variation will make it even more difficult to find the inclusion of a Trojan. If we take process noise into account the issue is even more serious, because process noise and process variation both help in the masking of the Trojan. < g r a p h i c s > Figure 1: Average I_DDT versus Process Corners For this reason the inherent relationship between I_DDT and F_MAX is considered to form a multi parameter model to find the inclusion of Trojans. Consider Figure 2. In this figure Chip i and Chip j both have the same I_DDT value but both chips are different because of the inclusion of the Trojan. But if you even more closely, both chips have a different F_MAX value. This means both parameters contribute in finding a Trojan. Consider Chip i and Chip k, you can easily see that they have the same value of I_DDT. Under single parameter analysis both these chips would be considered one and the same. But with considering F_MAX as well we can the difference between a genuine and a Trojan chip. < g r a p h i c s > Figure 2: Average I_DDT versus F_MAX This was not possible with only one parameter analysis. Process variation and process noise can easily cause a lot of Chips which contain Trojans seem genuine(false positive) and a lot a genuine chips seem like infected chips(false negative). The intrinsic relation between I_DDT and F_MAX can be used to distinguish between chips under process noise. Now in the proposed approach F_MAX is used to calibrating the test corners of the chip. 
It is calculated from the delay of any one path within the chip. This makes it even more difficult for the adversary to insert a Trojan. A standard IC has more than 100,000 paths, so it is very difficult for the adversary to guess exactly which path was used in calculating F_MAX. Even if the attacker is able to guess which path was used to calibrate F_MAX and introduces a Trojan in that path, it will cause a shift in the I_DDT vs F_MAX curve and the Trojan will still be detected. The only way an attacker can bypass the multi-parameter approach and alter F_MAX is if he/she can estimate the exact process variation, which is next to impossible before the fabrication of the chip. Another feature of the method in <cit.> is to increase the sensitivity of finding a Trojan while taking process variation into account. The expression for sensitivity is given by Sensitivity = (I_tampered - I_original)/I_original × 100. From this expression we can see that, to increase sensitivity, we have to reduce I_original. 5.3 Test Vector Generation and Power Gating In this approach a graph partitioning technique is used to divide the IC into different blocks, and test vectors are applied to each of these blocks independently. The blocks should be large enough to weed out process noise and small enough to eliminate background noise. Each partition should be independent because the test vectors are applied to each block individually. The MERO statistical approach is used in applying the test vectors. The test vectors are selected to increase the switching of the Trojan and hence increase I_tampered. To reduce I_original, a power gating mechanism is used. The power gating mechanism is used during test time to decrease the current from the other blocks. It is supplementary to the test vectors, which are used to increase the switching. We should remember that turning off all the remaining modules is not possible because some modules may be interdependent. This marginally increases I_original, but it still gives much better sensitivity than not doing power gating at all. Hence, using this technique we have a higher chance of finding a Trojan of a very small feature size. The entire procedure is summarized in Figure 3. Figure 3: Major Steps in Trojan Detection using Multiple Parameters. 5.4 Other Side Parameters We should note that parameters other than I_DDT and F_MAX can also be used to characterize Trojans. Generally one parameter is affected by the Trojan (in this case I_DDT) and the other parameter is used to characterize process noise (F_MAX). I_DDT is the transient current of the circuit. Another component of the current is the leakage current I_DDQ. Every transistor contributes to leakage, so even when the Trojans are off they contribute to leakage. Hence we also have a relationship between I_DDQ and F_MAX: I_DDQ monotonically increases with F_MAX. It shows the same behavior as I_DDT vs F_MAX, so any relation of I_DDT with F_MAX can be applied to I_DDQ with F_MAX. Also, I_DDT and I_DDQ are both input dependent. See Figure 4. Figure 4: The correlation among I_DDT, I_DDQ, and F_MAX. 5.5 Combination With Logic Testing Side-channel parameter testing may not be able to detect ultra-small Trojans due to process variations and process noise, but it is non-invasive. On the other hand, logic testing techniques can detect Trojans with high confidence even with worst-case process variation taken into account.
But logic testing has poor test coverage because most Trojans require a specific input vector to be triggered; hence it is also inefficient. The authors of this paper <cit.> suggest a method in which the multiple-parameter analysis is combined with logic testing, thus merging the best of both methods. Since side-channel analysis does not require the Trojan to be fully activated and logic testing is immune to process variation, we have a better chance of detecting a Trojan smaller than 0.1% of the circuit size. From the above we can draw a good conclusion about the method given in this paper. It gives us a useful relation between side-channel parameters which can be used to find Trojans in a circuit. Many parameters other than the ones in this paper can be used. Moreover, the power gating and test vector generation methods can easily increase the sensitivity to the Trojan current. The consideration of process variation and noise is a good example of how scaling can help Trojans hide and of how multiple-parameter analysis can find Trojans even with these variations taken into account. Let us now compare the multiple-parameter analysis with a few single-parameter analyses, given in papers <cit.> and <cit.> and the timing-based papers <cit.>, <cit.> and <cit.>, and see how it fares against them. 5.6 Malicious Circuitry Detection using Transient Power Analysis for IC Security The paper in <cit.> deals with a single side-channel parameter method for detecting a hardware Trojan. The authors propose a non-invasive transient power scheme to detect the presence of a Trojan. They also consider the effect of process noise and variations. The paper takes a statistical approach to find the presence of a Trojan. All integrated circuits consume power. Power has two components, static and dynamic. When switching occurs a transistor consumes dynamic power, and in steady-state operation it consumes static power. In the same way, if a Trojan circuit is inserted, it contributes to leakage (static) power when it is in a non-activated state and consumes dynamic power when an input vector activates it. P = P_dynamic + P_static = (0.5·C·V_dd^2 + Q_se·V_dd)·f·N + V_dd·I_leak. Taking all external factors into account, we can summarize the power consumed by a chip as Power consumed = Dynamic Power + Leakage Power + Measurement Noise + Trojan Power, where the Trojan power is only added to the power signature in the presence of a Trojan. This is the way the power signature of an IC is calculated. The paper uses the PCA (Principal Component Analysis) method to help in the Trojan detection scheme. The steps of the detection can be summarized as follows: 1) From a set of ICs, select a few and run logic tests on them to find their power signatures. 2) Calculate the mean power of the sample set and subtract it from all the power traces to give the sample set zero mean. 3) Find the covariance matrix S of the population, perform eigenvector decomposition on it and find its eigenvalues and eigenvectors. 4) Choose the top m eigenvectors to form the transformation matrix A. 5) Use PCA to project the power traces of the genuine ICs onto the subspace of the eigenvectors. Now, for every untrusted IC we use PCA to generate its projections and project it onto the eigenvector subspace. Comparing the two spreads, we can figure out if the IC is genuine or infected. The paper is a very good example of involving statistical methods in the detection procedure. It provides a fast method of Trojan detection for a large IC population.
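Below is a minimal sketch of the PCA-based projection and comparison just described; the synthetic power traces, the number of retained components and the 3-sigma decision box are illustrative assumptions rather than details from <cit.>.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical power traces: rows are ICs, columns are time samples.
golden = rng.normal(1.0, 0.05, size=(20, 500))   # trusted (golden) ICs
suspect = rng.normal(1.0, 0.05, size=500)
suspect[100:150] += 0.2                           # assumed extra Trojan switching power

# Steps 2-4: zero-mean the golden set, eigendecompose its covariance,
# and keep the top-m eigenvectors as the transformation matrix A.
mean = golden.mean(axis=0)
centered = golden - mean
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
m = 3
A = eigvecs[:, -m:]                               # top-m principal directions

# Step 5: project the genuine traces and the untrusted trace onto the subspace.
golden_proj = centered @ A
suspect_proj = (suspect - mean) @ A

# Compare the two spreads: flag the IC if its projection falls outside
# an assumed 3-sigma box around the golden cluster.
lo = golden_proj.mean(axis=0) - 3 * golden_proj.std(axis=0)
hi = golden_proj.mean(axis=0) + 3 * golden_proj.std(axis=0)
print("Trojan suspected:", bool(np.any((suspect_proj < lo) | (suspect_proj > hi))))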
The eigenvector projections are immune to process noise, and the method works for small process variations. The issue is that, since it is a single-parameter method, it is very vulnerable to large process variations. The method could therefore cause a lot of false positives or false negatives. Also, there is a possibility that the Trojan is not activated at all. In that case the Trojan current would be much smaller than the IC circuit current, and hence the sensitivity in finding a Trojan is reduced. This would impact the Trojan detection procedure if the Trojan size is around 0.5% of the circuit size or less. The multi-parameter method in <cit.> addresses these issues with two parameters, in which one is affected by process noise and the other parameter is affected by process variation. 5.7 Hardware Trojan Detection Using Path Delay Fingerprint So far we have seen papers which deal with power signatures to detect Trojans. The paper in <cit.> discusses a method that uses path delays to detect a Trojan in a circuit. The paper talks about implicit and explicit payload Trojans. A Trojan, when activated in a circuit, can change the delay timing of the paths within the circuit. This delay is affected by process variation, but with proper filtering it can be used as a method of Trojan detection. The method discussed in this paper can effectively detect explicit payload Trojans. Explicit payload Trojans can alter internal signals of an IC; hence they add to the delay of the signals. Implicit payload Trojans do not affect any signal in the circuit. They use the existing signals to generate their own output signals; hence they do not add to the delay. The basic procedure can be described in three steps, which are as follows. Path delay gathering from a nominal chip: For this step, high-coverage test patterns are given at the input and the path delay information is extracted. Process variation plays a big part in deep-submicron technology and can be as large as 5%. In this paper the authors vary the delay parameters of the SMIC technology library by 7.5% in both directions. A logic synthesis tool is then used to compile the test circuit under consideration to give its gate-level netlist without a Trojan. Trojans are then inserted into the netlist. The modified netlist is then passed through a static timing analysis tool to generate the SDF (Standard Delay Format) files for every circuit under consideration. The SDF contains the path delays for all paths in the test circuit. Delay testing: Test patterns should be conclusive enough to find out all the characteristics of a circuit under test, so a lot of test patterns should be generated for this case. The paper mentions using an ATPG tool which analyses any input netlist and generates the set of input patterns. The input patterns are then applied to the chip to get its corresponding output vectors. The number of path delays for any chip under consideration can be roughly calculated as the number of inputs times the number of outputs, which results in a very large test vector. Sample analysis and Trojan detection: For detecting a Trojan, the entire data set is analyzed. Since the data set has many dimensions, it has to be reduced before we can actually deal with the data. A PCA method can be used to reduce the number of dimensions. However, a PCA analysis can cause a loss of important information in the dataset. Hence the data is first divided into smaller sets and then a PCA is done to reduce the dimensionality.
For detecting the Trojans, a Quickhull algorithm is used to construct a convex hull of a set of points. A convex hull is the smallest convex set which contains all the points, and this can be used in Trojan detection. The PCA causes a reduction in dimension, and by dividing the test vector into smaller units we can use these sub-units to construct their corresponding convex hulls (Figure 5). Figure 5: Convex Hull of Test Vector Space. The number of dimensions of the convex hull is reduced to 3 to give a uniform three-dimensional space in which to plot the points. From convex hull theory we can conclude that the points which lie near (within) the surface of the convex hull are genuine, and the ones which lie further away from the convex hull are classified as Trojan-infected. The authors of this paper have tested this method with various Trojans, including both explicit and implicit payload Trojans, and with moderate to worst-case process variation corners. The Trojans implemented by them range from 0.76% to 2% of the total chip area. They report a 100% hit rate for explicit Trojans and a 36% hit rate for the implicit variety. The paper makes a good attempt to take worst-case process variation into account in the analysis. The authors also conducted tests in which they introduced the Trojans in various parts of the circuit and achieved good results. The only issue with this method is its ineffectiveness in finding implicit payload Trojans, which could be used to transmit information but do not affect the timing statistics of the chip. The multi-parameter method in <cit.> should be able to find both implicit and explicit payload Trojans, since the type of Trojan does not matter in its method of detection. 5.8 At-Speed Delay Characterization for IC Authentication and Trojan Horse Detection The paper in <cit.> deals with a run-time method to predict the presence of a hardware Trojan. It uses timing information of the combinational paths to find out whether a Trojan is present or not. Currently many existing techniques, such as Physically Unclonable Function (PUF) circuits, are used to authenticate ICs. PUFs have a challenge-response architecture to authenticate ICs. This normally consists of interconnected flip-flops which accept an n-bit challenge vector. The number of flip-flops corresponds to the number of test inputs. For each n-bit input there is a 1-bit output, which is very inefficient and can be reverse-engineered by the attacker with some effort. There is also an area overhead and a speed issue as the number of test patterns increases. The paper introduces another run-time method to predict the presence of a Trojan using a timing signature. The method uses an at-speed delay measurement which does not affect the circuit functionality. The basic motivation for this paper is that, when a Trojan is activated, the delay characteristics of the circuit change. While PUFs deal with non-functional parts of the circuit, the technique in this paper deals with the functional parts of the circuit as well as the non-functional parts. The path delay characterization method uses negatively skewed registers which are placed in the combinational paths within the circuit. The architecture consists of the main circuit (one register-to-register combinational path delay), clk1 (the main clock in the system), the shadow register (a negatively skewed register) and the destination register (the final register in the combinational path). The shadow register takes the same input as the destination register.
The shadow register is clocked by clk2, which is negatively skewed with respect to clk1. This means the shadow register receives the input before the destination register. The contents of both registers are compared after every clock cycle, and a 1-bit result flag is set to 1 whenever there is a discrepancy between them. The clk1 remains constant, but the skew of clk2 is changed in steps, and the contents of both registers are compared for the various values of skew. Once the result bit is set to 1, no other register input can change its value. The result bits of the various registers are then read out through a scan chain. The presence of a 1 in the chain indicates the presence of the Trojan. If we back-annotate to the destination register whose result bit was set to 1, we can predict the possible location of the Trojan. The authors implemented this scheme on a Virtex-II FPGA. The FPGA's DCM (Digital Clock Manager) was used to change the skew of the clock by providing a phase shift in the clock. The total time for one delay measurement process is given by s*(c+p), where s is the number of skew steps, c is the number of clock cycles in each skew step, and p is the number of paths whose delay is measured. This means the delay measurements are performed every s*(c+p) cycles. The method also incorporates an on-chip ring oscillator which is used as a temperature compensation network. The delay of the registers can change due to temperature effects, and this mechanism makes sure that temperature irregularities are compensated. The verification mechanism consists of an authentication control block which generates the test inputs. A signature is also present on chip which gives the desired response to the given test vector. The response is calibrated by calculating the path delay of each of the individual paths of the circuit. Since this is an inherent circuit feature, it can be regarded as the timing signature of the circuit under test. The delay is calculated in the following manner: the skew of clk2 is increased in a number of steps until a 1 is generated, and the skew which generates the 1 is then subtracted from clk1 to give the delay of the circuit. The paper gives a good account of how Trojan detection can be done at runtime. It only requires minimal hardware of one register per path, which is not a very big area overhead. It can help the IC authenticator detect the presence of a Trojan after the chip is received from the foundry. The technique is non-invasive, and the only changes to the design are made before it is sent to the foundry. The paper has a few cons, primarily related to skew and process variation. It is very difficult to control skew on chip using a clock source without taking into account power fluctuations and other secondary effects. Also, having a dual clock in a circuit can be infeasible because of various clocking issues. The main worry is that the adversary may understand the design in the foundry and selectively disable the shadow register mechanism; they could even make changes so that an error is never reported. Our primary paper <cit.> takes process variation into account and also works with a single clock. It is also immune to jitter and other factors and does not add any design overhead. 5.9 RON – An On-Chip Ring Oscillator Network for Hardware Trojan Detection A ring oscillator is a circuit which has an odd number of gates and is used to set the frequency of a circuit.
It can be a set of inverters or a set of any combinational gates. The authors of this paper <cit.> suggest the use of such a structure to help find a Trojan. The paper mentions that this structure effectively eliminates the issue of measurement noise, helps in measuring dynamic power, and additionally compensates for the impact of process variations. Using statistical methods, the detection of Trojans is possible. The principle of this technique is that the power signature of an infected IC will be different from that of a genuine one, and a ring oscillator network (RON) has the ability to detect power fluctuations. A number of ring oscillators can be placed around the circuit and used as power monitors. The frequency of an oscillator is given by the delay of all the gates (inverters) in the chain: for an n-stage ring oscillator the delay is 2*n*t_d and the frequency is 1/(2*n*t_d). The delays of the inverters are impacted by process variations as well as power supply variation. When the voltage across the chip drops, the delay of the inverter increases. If a Trojan is inserted in an IC, the switching gates in the Trojan cause a small voltage drop on the power lines, so the power supply noise for Trojan-free and Trojan-infected ICs is different. With this fact in mind, a Trojan-infected IC can be detected. Since one ring oscillator is insufficient for a chip, multiple ring oscillators are inserted all around the chip to capture the power supply and noise effects. The outputs of the ring oscillator network can be used as a power signature to distinguish between Trojan-free and infected ICs. The oscillation count from the RON is used in generating the power signature. The formula for the oscillation count is C_i = ∫_0^T dt/(2*n*t_di(t)). Using the difference in cycle count, a distance between the genuine IC and the Trojan-infected IC is generated. A circuit with multiple ring oscillators uses a multiplexer to select the oscillator that performs authentication and another multiplexer to select the one that does the recording. To deal with process and other variations, statistical methods such as single outlier analysis and principal component analysis are used. If the oscillation cycles of a RON lie within a certain threshold, the IC is determined to be genuine. Since a large number of ring oscillators is used within the IC, producing many oscillation counts, the PCA method is used to reduce the number of dimensions to deal with. A convex hull is then constructed with the top three components. If the output of the RON lies beyond the hull, the IC has failed authentication. ICs which have passed authentication are further analyzed using advanced outlier analysis. For the experimental analysis the authors tested 10 different Trojans whose sizes range from 0.36% to 0.9% of the circuit size. The average Trojan detection rate was 80-90%. From our analysis of the paper we can say that it describes a very good detection approach that is inclusive of process variation. Also, in the layout the oscillator network has a distributed nature, which makes it difficult to disable. Another advantage is that multiple ring oscillators give good coverage of the IC layout. The only con we could think of is the effect of jitter, because not all the ring oscillators would suffer from the same amount of jitter due to their distributed placement. Also, multiple ring oscillators can introduce unnecessary area overhead if the IC is of medium size. The multi-parameter technique in <cit.> does not add any area overhead and also takes process variation into account.
The effects of skew and jitter are also effectively modeled in this procedure. 5.10 Sensitivity Analysis to Hardware Trojans Using Power Supply Transient Signals The paper in <cit.> uses transient power as an electrical signature to detect Trojans. Transient power is generated by the transient current, which is caused by the switching of gates. The analysis is done at multiple ports to improve the resolution of Trojan detection. An attacker could design a Trojan in such a manner that it has minimal impact on I_DDT and I_DDQ, and background noise can also help in masking the whereabouts of the Trojan. This method is statistical and needs a simulation model. It analyses an IC's supply current measured from multiple supply ports to deal with small Trojan-signal-to-background-current ratios. A simple calibration procedure is used to reduce the impact of process variations and background noise. Simulation models are extracted from a layout under test and the data is analyzed to detect the possibility of a Trojan. For the experimental analysis, Trojans were manually inserted in the interstitial spaces of the layout. The Trojan is a special set of gates which has a constant value on one of its inputs, while the second input is derived from a node in the layout. The I_DDT signals are measured from various ports on the chip, and a test sequence is applied to cause switching in the gates. The calibration procedure deals with mitigating the effect of process variation which occurs on the chip core, power grid and off-chip connections. A p-channel transistor with its gate connected to a scan flip-flop and its source connected to a power port of the chip is used in the calibration. The scan flip-flop generates a step input which triggers the p-transistor and causes a short between V_DD and Gnd. The impulse responses are generated from the step responses and inserted into a calibration matrix. Using the calibration matrices from the various ports, a transformation matrix is generated. Trojan detection analysis: A scatter plot is used for the statistical analysis. Two separate test sequences are applied and I_DDT is measured from the various ports. Calibration is conducted to reduce process variation. The calibrated results are then plotted on scatter plots for analysis. The mean and variance of the scatter plot are used to determine the statistical limits of an enclosing ellipse. The enclosing ellipse defines the region within which Trojan-free points are expected to reside, while infected points are expected to fall outside it. If any point lies outside, an outlier analysis is performed to ascertain whether a Trojan is present or not. The paper uses a calibration method to remove the effect of process variation, which is very important with increased scaling. The scatter-plot analysis provides a fast yet accurate way of analyzing current and power traces. The technique could be extended to other side-channel parameters like timing or leakage current. The only issue is that the circuitry used in the calibration procedure is itself subject to process variation, which could leave the mitigation process with some residual error. The multi-parameter technique in <cit.> mitigates this calibration issue by using two parameters, in which one is affected mainly by process variation and the other mainly by process noise, which gives it a better ability to reduce the impact of process variation. § RESULTS Table 1 shows the Trojan coverage using only MERO and using a combination of MERO and the conventional multi-side-channel method.
We see that for the given benchmarks an average of 99.98% of the Trojans were detected. The size of the Trojans varied from 2 to 16 inputs. Hence ultra-small to small Trojans are detected with good accuracy using the combination of MERO and the conventional multiple-parameter side-channel method. Table 1: Trojan Coverage for ISCAS-85 Benchmark Circuits. § CONCLUSION For the purpose of this survey we have reviewed six papers related to hardware Trojans. The basic issue which was understood during the analysis is that process variation and process noise play an important role in hindering the search for Trojans in an IC. Most single-parameter methods can only take care of either noise or variation, but not both. Hence we see that they suffer from limited accuracy for ultra-small Trojans, which are about 0.02% of the entire circuit size. The main paper <cit.> which we reviewed takes a very novel approach which brings two side-channel parameters into account for the Trojan detection. Not only does this increase the detection resolution, it also reduces the complexity of the test vector space. With the combination of logic testing and multi-side-channel analysis, the detection rate can be increased further. § CRITIQUE Pros * The paper revealed a very novel idea using the combination of two interdependent parameters. * The integration with logic testing is a good way to combine two different Trojan detection methods. * Selective test vector generation and power gating not only increase detection sensitivity but also reduce the complexity of the test vectors needed to activate the Trojans. * Overall the paper was a very good read, used simple language and always kept the reader engaged. * Process variation was covered in great detail in this paper. Cons * The power gating mechanism adds a significant area overhead and also causes a modification of the original circuit which requires specialized tools for the implementation. * The integration with logic testing will increase the testing time by 2-3 times, which should have been mentioned in the paper. 1 S. Narasimhan, D. Du, R. S. Chakraborty, S. Paul, F. G. Wolff, C. A. Papachristou, K. Roy and S. Bhunia, "Hardware Trojan Detection by Multiple-Parameter Side-Channel Analysis," IEEE Transactions on Computers, vol. 62, no. 11, November 2013. 2 R. Rad, J. Plusquellic and M. Tehranipoor, "Sensitivity Analysis to Hardware Trojans using Power Supply Transient Signals." 3 Y. Jin and Y. Makris, "Hardware Trojan Detection Using Path Delay Fingerprint," Proc. IEEE Int'l Workshop on Hardware-Oriented Security and Trust, pp. 51-57, 2008. 4 L. Wang, H. Xie and H. Luo, "Malicious Circuitry Detection Using Transient Power Analysis for IC Security," QR2MSE 2013. 5 J. Li and J. Lach, "At-Speed Delay Characterization for IC Authentication and Trojan Horse Detection." 6 X. Zhang and M. Tehranipoor, "RON: An On-Chip Ring Oscillator Network for Hardware Trojan Detection," EDAA 2011.
http://arxiv.org/abs/2307.00759v1
20230703052938
Multilingual Contextual Adapters To Improve Custom Word Recognition In Low-resource Languages
[ "Devang Kulshreshtha", "Saket Dingliwal", "Brady Houston", "Sravan Bodapati" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
Multilingual Contextual Adapters To Improve Custom Word Recognition In Low-resource Languages Devang Kulshreshtha, Saket Dingliwal, Brady Houston, Sravan Bodapati August 1, 2023 ================================================================================= *Equal contribution Connectionist Temporal Classification (CTC) models are popular for their balance between speed and performance for Automatic Speech Recognition (ASR). However, these CTC models still struggle in other areas, such as personalization towards custom words. A recent approach explores Contextual Adapters, wherein an attention-based biasing model for CTC is used to improve the recognition of custom entities. While this approach works well with enough data, we showcase that it isn't an effective strategy for low-resource languages. In this work, we propose a supervision loss for smoother training of the Contextual Adapters. Further, we explore a multilingual strategy to improve performance with limited training data. Our method achieves 48% F1 improvement in retrieving unseen custom entities for a low-resource language. Interestingly, as a by-product of training the Contextual Adapters, we see a 5-11% Word Error Rate (WER) reduction in the performance of the base CTC model as well. Index Terms: CTC, multilingual, personalization, low-resource ASR, OOVs, entity recognition § INTRODUCTION End-to-End (E2E) models still stubbornly lag their hybrid predecessors in custom entity recognition <cit.>, where a model is presented with a list of custom words and is expected to better recognize them. Hybrid models' modularity and explicit frame-wise phonetic targets are naturally amenable to various boosting approaches used for this task. However, recent work addresses this problem for E2E models by exploring various methods to boost custom words, such as LM fusion <cit.>, multi-modal training <cit.> or modification of the E2E model itself <cit.>. CTC-based E2E models <cit.>, in particular, are a challenge for current boosting methods because of their time-synchronous and conditionally independent outputs. Recently, a two-phase boosting approach for CTC explores improving its customization <cit.>. In this work, Contextual Adapters are trained to attend to custom entities using representations from different layers of an already trained acoustic-to-subword encoder and copy them to the output when they are present in speech. Together with an on-the-fly boosting approach during decoding, this approach resulted in improved recognition of out-of-vocabulary words (OOVs) on the English VoxPopuli dataset by 60% (F1-score) with minimal impact on overall WER. However, the dependency of Contextual Adapters' ability to learn to boost patterns on data volume remains unclear. We identify that this ability significantly reduces for low-resource languages. We attribute our finding to three main reasons: (1) the training objective for the Contextual Adapters lacks direct guidance and is hard to optimize, especially in a low-data setting, (2) the CTC encoder trained using limited data produces low-quality audio representations, and (3) low-resource languages may not have enough diversity in the training data for the Contextual Adapters to copy and boost from the weak encoder outputs. In this work, we systematically explore different strategies to mitigate the above issues. First, we propose a novel multitask cross-entropy loss for training the Contextual Adapters, which provides additional supervision to copy the right entity word whenever required.
Second, we leverage multilingual training of E2E models, which has been shown to significantly improve the performance for low-resource languages in terms of overall WER <cit.>. Lastly, we expand multilingual training to include Contextual Adapters to bootstrap the learning of copying and boosting mechanism for a low-resource language. Our proposed training strategy results in models that exhibit robust overall performance and can be customized to any list of entities. Our model, that uses just 100 hours of Portuguese language for training, achieves a 48% absolute improvement of F1 scores in the retrieval of custom words over monolingual training of Contextual Adapters. Interestingly, we find that fine-tuning the multilingual encoder and multilingual Contextual Adapters together on monolingual data reduces WER by 5-11% over fine-tuning the multilingual encoder alone for low-resource languages. § BACKGROUND Contextual Adapters for CTC Encoder: As showcased in the Figure <ref>, a CTC encoder (parameterized by θ_ctc) takes in an audio as input, passes it through multiple Conformer <cit.> blocks to produce a sequence of word piece posteriors, which is then optimized using the CTC loss (ℒ_CTC) <cit.>. One of the methods to boost a list of custom words includes training a separate attention-based module called Contextual Adapters (parameterized by θ_CA) by freezing θ_ctc and optimizing ℒ_CTC on a subset (D^boost) of the training data that contains at least one entity/rare term in the transcript. A training sample d = (x,y,W^boost) for the Contextual Adapters contains an audio x with T frames, transcript y and a list of K random entity words W^boost. Let w^boost be an entity word present in y, then w^boost∈ W^boost. As highlighted in the Figure <ref>, the Contextual Adapters is comprised of two main components: 1) an LSTM catalog encoder, which encodes the sub-word tokens comprising each entity term in the list W^boost and 2) an attention-based biasing adapter, which attends over the union of each word in the list W^boost and a special ⟨ nb ⟩ (no-bias) token to learn a context vector. This context vector is added to the final encoder layer output of the base CTC model and the sum is then passed into the softmax layer. The attention focuses on the ⟨ nb ⟩ token for the time frames which do not correspond to any word in the list, while it attends to the sub-words corresponding to w^boost for the frames in which it is spoken. After the module identifies the right word to boost, the probability of the subword sequence corresponding to that word is increased in the output. This learned `copying` mechanism can then be used to personalize an ASR system to a custom entity list during inference. Multilingual End-to-end Model Training: Multilingual modeling has seen a resurgence in parallel with E2E modeling. As E2E models do not require explicit alignments nor phonetic targets, the data from individual languages can be pooled together using a common sub-word vocabulary <cit.>. When used in low-resource language settings, this simple approach yields impressive improvements over monolingual models <cit.>. These pooled models can be further fine-tuned on individual languages to improve performance <cit.>. § METHODOLOGY §.§ Supervision Loss for Contextual Adapters Prior work used various approaches for training attention-based modules for personalization, such as a curriculum learning-based approach that slowly increases the size of the entity word list i.e. 
K during training <cit.>, adding a special suffix to each entity word in the ground truth <cit.>, and using separate labeled data for training the Contextual Adapters and the CTC model <cit.>. However, they optimize ℒ_ctc for training the attention module’s parameters. This loss does not explicitly capture the problem of choosing the right entity word and boosting the corresponding sub-word sequence. In this work, we introduce an adapter-specific loss function to train the Contextual Adapters, which is added to the CTC loss, and then jointly optimized as a multi-task objective. For a training sample as defined in Section <ref>, at any time-step t ∈ T, attention weight of any entity word w_k, k ∈ K in the Contextual Adapters is the normalized dot product between the encoder output at that time step t (query) and the catalog encoder output for w_k (key). Let 𝒲_⟨ nb ⟩^t, 𝒲_k^t be the attention weight given to ⟨ nb ⟩ token and the word w_k at time step t respectively. We use these attention weights to directly predict the right entity word from the list. We add a K-class cross-entropy loss (CE loss) as defined in Eq. <ref>. We only want to add this loss for those time-frames in the input audio where the entity word was spoken. Therefore, for all other audio frames, we maximize the attention given to the ⟨ nb ⟩ token, and hence, we weigh the time frames in an inverse proportion to 𝒲_⟨ nb ⟩ ℒ_CE (.) = - ∑_t=1^T (1 - 𝒲_⟨ nb ⟩^t ) log (σ (𝒲_k’^t)) where k’ is the index of the correct word i.e. w_k’ = w^boost and σ(.) is the Softmax function over all the attention weights 𝒲_k^t, k ∈ K. Finally, for training our Contextual Adapters with limited paired text and audio data, we minimize a weighted combination of the two losses i.e. ℒ_net = ℒ_CTC + αℒ_CE. §.§ Multilingual Contextual Adapters For low-resource languages, training Contextual Adapters for custom word boosting is challenging as 1) a weak CTC model outputs sub-optimal audio embeddings, preventing the Contextual Adapters to identify on the right word, and 2) learning the parameters for the Contextual Adapters requires a large volume of labeled data. We add multilingual training data to both CTC and the Contextual Adapters helping in alleviating both the above issues. We investigate two different tracks for multilingual training with three stages for each track. For both these tracks, we first train a baseline CTC encoder model on pooled multilingual data (Stage I). Track "a": We fine-tune the multilingual encoder on the monolingual data for each language (Stage II) and finally freeze the CTC encoder to train a monolingual Contextual Adapter using the outputs from the (presumably) strongest possible encoder model (Stage III). In this track, we address the issue of the weak CTC encoder model. However, the training of the Contextual Adapters is still restricted to a small dataset from a low-resource language. Track "b": We design another track, where we freeze the CTC encoder model first and train the Contextual Adapters by pooling paired speech/text training examples and entity lists from all languages (Stage II), and then fine-tune the full model (encoder and Contextual Adapters) on the monolingual data language for each language (Stage III). The details of this track are highlighted in Figure <ref>. This track helps to mitigate both the issues with training of the Contextual Adapters simultaneously. 
The multilingual Contextual Adapters helps in bootstrapping the `copying` mechanism for a low-resource language, while the multilingual CTC model acts as a strong baseline. § EXPERIMENTS §.§ Data We evaluate our methods on from five different languages - Portuguese (pt), Italian (it), Spanish (es), English (en) and French (fr). Similar to <cit.>, our training data volume varies from 100 hours for Portuguese (low-resource) to 2000 hours for French (high-resource). Table <ref> highlights the volumes of training and validation dataset for each language. This data comes from a mix of conversational telephony, media, news and call-center, with a mix of various dialects, sampling rates and acoustic conditions. We use the test split of the publicly available Librivox MLS dataset <cit.> for evaluation. It contains an average 8.9 hours of paired audio and text for each language. To showcase the personalization aspect of our models, we provide a custom list of entity words to the model and observe the change in recognition of those specific words. Since we do not have an explicit list of entities curated for our test dataset, we follow the same strategy as <cit.> and randomly choose 500 OOV words per language from references i.e. the words that never appear in training transcripts. Table <ref> provides an overview of the evaluation transcripts, indicating the total number of tokens, as well as the number of tokens corresponding to the OOVs that are chosen as our entity lists. Although these words are only a small fraction of the total number of tokens in the transcripts, they are typically the most useful entity terms for the downstream application. Therefore, along with WERs, we track the recognition of these words using F1 scores (%) directly. The F1 score is determined using the frequency of the custom word in the reference and the hypothesis of the ASR model. §.§ Experimental Setting We use 12-layers of Conformer blocks with 8 self-attention heads, a 1024-dimensional feedforward layer and an input/output size of 384 for our CTC models <cit.>. These models directly predict subword targets generated from a common sentence-piece model for all languages with total vocabulary size of 2048. All our base CTC models are trained for 60 epochs with a maximum learning rate of 3e-3 after 20k warmup steps. We use greedy decoding algorithm for inference. The architecture and hyper-parameters for our Contextual Adapters resembles <cit.>. The total number of trainable parameters in the CTC models and the Contextual Adapters are 42 million and 1.25 million respectively. Since we do not have entities marked in our training data, we choose w^boost to be the word in the utterance with the lowest frequency in the entire training text. W^boost is a random subset of the combined list of such words from all the utterances. Similar to <cit.>, we use curriculum learning for training the Contextual Adapters by increasing the size of W^boost from 30 to 100 over different epochs. For our models with the additional CE loss (ℒ_CE(.)), we choose α=25 based on visual inspection of training loss curves. The ESPnet library <cit.> was used for all our implementations. § RESULTS §.§ Monolingual Training Cross Entropy loss is essential for training Contextual Adapters with limited data: The first set of rows in the Table <ref> summarizes the results of monolingual model training with Contextual Adapters for each of the five languages. 
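As a concrete illustration, the following is a minimal sketch of how such an occurrence-based F1 score could be computed from reference/hypothesis pairs; the whitespace tokenization and the toy Portuguese strings are illustrative assumptions and not the exact scoring script used for our results.

from collections import Counter

def custom_word_f1(references, hypotheses, custom_words):
    # Count occurrences of each custom word in the reference and the hypothesis,
    # then accumulate matched, missed and spurious occurrences.
    tp = fp = fn = 0
    for ref, hyp in zip(references, hypotheses):
        ref_counts = Counter(w for w in ref.lower().split() if w in custom_words)
        hyp_counts = Counter(w for w in hyp.lower().split() if w in custom_words)
        for w in custom_words:
            matched = min(ref_counts[w], hyp_counts[w])
            tp += matched
            fn += ref_counts[w] - matched   # missed occurrences
            fp += hyp_counts[w] - matched   # spurious occurrences
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example with a hypothetical OOV entity list.
custom_words = {"aveiro", "guimaraes"}
refs = ["ela mora em aveiro", "visitamos guimaraes ontem"]
hyps = ["ela mora em aveiro", "visitamos o porto ontem"]
print(custom_word_f1(refs, hyps, custom_words))   # 0.666...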
For all of the languages except French, the conventional two-stage training of the model (encoder training followed by freezing encoder and training Contextual Adapters with CTC loss) does not improve the F1 score on the lists of custom words (Model IDs: MONO-I and MONO-II). However, adding the CE loss (Sec. <ref>) for the training of the Contextual Adapters results in small F1 score improvements for all languages (Model ID: MONO-II.ce). This suggests that the guidance from the CTC loss alone is insufficient for the Contextual Adapters to learn useful representations in a limited data setting, whereas the proposed CE loss can encourage boosting the right word for the right set of time frames. However, all the improvements for custom entity recognition are modest for monolingual models, likely due to the weak base CTC model that produces encoder layer outputs that are not very useful to identify and boost the right custom word. §.§ Multilingual Training (Track "a") In the next set of rows in the Table <ref>, we compare the performance of Contextual Adapters trained on a strong multilingual CTC encoder baseline. This encoder model is trained using pooled data from all five languages (Stage I). Although simple, this approach results in strong WER improvements for low-resource languages like Portuguese over baseline monolingual models (Model ID: MONO-I and ML-I). This is attributable to the known phenomenon of knowledge transfer from higher-resource to lower-resource languages during multilingual training. Also, as expected, WERs for higher-resource languages (Spanish, English and French) see little to no improvements in WER, or even a slight degradation. In an attempt to train presumably the strongest possible CTC encoder model, we further fine-tune the encoder on monolingual data from each language (Track "a", Stage II). As expected, this improve the WER for all languages over the pooled model (Model IDs: ML-I and ML-II.a). After fine-tuning the encoder model for all the languages, we train Contextual Adapters using monolingual data by freezing the fine-tuned encoder (Track "a", stage III). However, we still do not see any improvement in the custom entity recognition for low-resource languages like Portuguese and Italian (Model IDs: ML-II.a and ML-III.a). But we see that the F1 scores for entity recognition increases by 15% with the Contextual Adapters for other high-resource languages like English and French. This suggests that conventional methods for training Contextual Adapters need sufficient data to learn the 'copying mechanism'. Similar to our observation for monolingual models, adding the CE loss during training of the Contextual Adapters provides minor improvements in the custom entity F1 scores for low-resource languages (Model ID: ML-III.a.ce). §.§ Multilingual Contextual Adapters (Track "b") In the second track, we freeze the multilingual encoder and train multilingual Contextual Adapters (Track "b", Stage II) by pooling paired speech/text training examples and entity lists from all languages. As opposed to the track "a" approach, this results in an improvement in the F1 score of custom words for all languages, even for low-resource languages like Portuguese (Table <ref>, Model IDs: ML-I and ML-II.b). This suggests that Contextual Adapters for low-resource languages can benefit from the data of other languages to learn the `copying mechanism`. 
Here, we also observe that the Contextual Adapters bring similar performance improvements with and without the proposed CE loss (Model ID: ML-II.b and ML-II.b.ce). This suggests that while the CE loss is essential for training of the Contextual Adapters with limited data, it is unnecessary when sufficient data are available. Contextual Adapters reduce the WER of the base CTC model: Next, we fine-tune both the encoder and the Contextual Adapters from ML-II.b together on monolingual data from each language (Track "b", Stage III). We first highlight the effect of this approach on WER during inference. We find that fine-tuning both the multilingual encoder and the multilingual Contextual Adapters together significantly reduces WER over fine-tuning just the multilingual encoder alone. For low-resource Portuguese, we see a 11% WER reduction, demonstrating that along with personalization, Contextual Adapters also aid in effective fine-tuning of the base encoder model. To determine whether this improvement can be attributed to the improved recognition of rare words by the Contextual Adapters, we also run inference using the jointly fine-tuned encoder alone (Model ID: ML-III.b.inf). We find an expected decrease in F1 scores, since custom entities are no longer being boosted, but overall WER remains largely unchanged. This improved WER after joint fine-tuning can be attributed to the supplementary training objective of Contextual Adapters, which involves distinguishing a target entity term embedded in the audio signal from a pool of random entity terms. This task, in conjugation with prediction of the correct sub-word token sequence, guides the base CTC encoder model to learn more meaningful audio representations. Moreover, our implementation of Contextual Adapters utilizes a weighted combination of features extracted from all Conformer blocks within the CTC encoder, allowing it have a more direct impact on the gradients used to update all layers than CTC loss, which only receives outputs from the final softmax layer. We plan to investigate the benefits of training Contextual Adapters to improve the base encoder model further as part of a future work. Multilingual Contextual Adapters achieve the best custom entity recognition: Apart from achieving the best WER for all the languages (Model ID: ML-III.b), this model also performs the best for custom entity word recognition. Joint fine-tuning results adds improvements over multilingual encoder and Contextual Adapter training only (Model ID: ML-II.b) between  10-20%. For low-resource Portuguese, the F1 score for OOV word recognition is 48% better than the conventional monolingual Contextual Adapters. Even for languages like French with a relatively large amount of training data, there is a substantial improvement in F1 scores from multilingual Contextual Adapters training compared to just monolingual training (55.1% and 70.0%, respectively). The disparity in F1-scores between the two tracks of our multilingual training demonstrates the importance of multilingual data in training Contextual Adapters, and suggests that the copying mechanism it learns is language-agnostic (at least for this set of languages). 
§ CONCLUSION In this work, we demonstrated that approaches such as Contextual Adapters <cit.>, which aim to bias CTC models towards a specific set of words, are ineffective in languages with scarce data. The main factors contributing to this behavior are the complex nature of the training objective, a deficient CTC base model, and insufficient data diversity to facilitate the learning of the boosting mechanism. To mitigate these issues, we first propose a multi-task cross entropy loss to train the Contextual Adapters. Next, we investigate the pooling of multilingual data as a means of transferring knowledge about the boosting mechanism from a high-resource language to a low-resource one. By utilizing our multilingual Contextual Adapters, we were able to achieve superior custom entity recognition, resulting in a 48% increase in F1 score for retrieving these entities, as opposed to a mere 7% improvement achievable with baseline methods. Interestingly, we observe that jointly training the Contextual Adapters and the base CTC model together reduces the WER of the base CTC model by 5-11% as a by-product, even when the adapters are not used during inference.
http://arxiv.org/abs/2307.00473v1
20230702045633
Multichannel scattering for the Schrödinger equation on a line with different thresholds at both infinities
[ "P. O. Kazinski", "P. S. Korolev" ]
math-ph
[ "math-ph", "math.MP", "quant-ph" ]
Multichannel scattering for the Schrödinger equation on a line with different thresholds at both infinities P.O. Kazinski and P.S. Korolev Physics Faculty, Tomsk State University, Tomsk 634050, Russia =================================================================================================================== The multichannel scattering problem for the stationary Schrödinger equation on a line with different thresholds at both infinities is investigated. The analytical structure of the Jost solutions and of the transition matrix relating the Jost solutions as functions of the spectral parameter is described. Unitarity of the scattering matrix is proved in the general case when some of the scattering channels can be closed and the thresholds can be different at left and right infinities on the line. The symmetry relations of the S-matrix are established. The condition determining the bound states is obtained. The asymptotics of the Jost functions and of the transition matrix are derived for a large spectral parameter. § INTRODUCTION The scattering problem for a one-dimensional matrix Schrödinger equation is a classical problem of quantum theory. The exhaustive treatment of the general properties of such scattering on the semiaxis, in particular, the proof of unitarity of the S-matrix in the presence of closed scattering channels, is given in the book <cit.>, Chap. 17. As far as multichannel scattering on the whole line is concerned, this problem was investigated in many works, especially in regard to the inverse scattering problem and the construction of exact solutions to the hierarchies of integrable nonlinear partial differential equations <cit.>. Nevertheless, to our knowledge, the description of the properties of the S-matrix, of the Jost solutions, and of the bound states in the general case of multichannel scattering on a line with different thresholds at both left and right infinities is absent in the literature. Our aim is to fill this gap. The study of the analytical structure of the Jost solutions and the S-matrix for one-channel scattering on the whole line in relation to the inverse scattering problem was already being conducted at the end of the 1950s <cit.>. As for the relatively recent papers regarding one-channel scattering, including scattering on potentials with asymmetric asymptotics at left and right infinities, see, e.g., <cit.>. The general properties of two-channel scattering for both identical and distinct thresholds were considered in <cit.>, the thresholds being the same at both infinities. In the papers <cit.>, these results were generalized to multichannel scattering on a line for the case when all the reaction thresholds coincide at both left and right infinities. The multichannel scattering problem with different thresholds identical at both infinities was investigated in <cit.>. In the works <cit.>, the multichannel scattering problem for systems of a Hamiltonian type with matrix potentials vanishing at both infinities was studied. However, unitarity of the S-matrix in the presence of closed channels was not proved in these papers. In the present paper, we prove unitarity of the scattering matrix for a stationary matrix Schrödinger equation on a line in the general case where the reaction thresholds do not coincide at both infinities and some of the channels are closed. Furthermore, we obtain the other relations connecting the transmission and reflection matrices for open and closed channels.
For such a scattering problem, we describe the analytical structure of the Jost solutions and of the transition matrix relating the bases of the Jost solutions. We prove the necessary and sufficient condition specifying the positions of the bound states. The form of this condition is well-known for multichannel scattering (see, e.g., <cit.>). However, we show that this condition also holds for the scattering problem with different thresholds at both left and right infinities. The paper is organized as follows. In Sec. <ref>, the analytical properties of the Jost solutions are described. Sec. <ref> is devoted to the analytical properties of the transition matrix. In Sec. <ref>, the basic identities for the scattering matrix are discussed. In Sec. <ref>, the relations between the transmission and reflection matrices in open channels are investigated. Sec. <ref> is devoted to the proof of unitarity of the S-matrix in open channels. In Sec. <ref>, we discuss the necessary and sufficient condition determining the location of bound states. In Sec. <ref>, we obtain the asymptotics of the Jost solutions and of the transition matrix for a large spectral parameter. In Conclusion, we summarize the results. The summation over repeated indices is always understood unless otherwise stated. Furthermore, wherever it does not lead to misunderstanding, we use the matrix notation. § JOST SOLUTIONS AND THEIR ANALYTICAL PROPERTIES Consider the matrix ordinary differential equation [∂_z g_ij(z)∂_z + V_ij(z;λ)]u_j(z)=0, z∈ℝ, i,j=1,…,N, where λ∈ℂ is an auxiliary parameter, V_ij(z;λ)=V_ij(z)-λ g_ij(z), and the matrices g_ij(z) and V_ij(z) are real and symmetric. The elements of these matrices are piecewise continuous functions. The matrix g_ij(z) is positive-definite. We also assume that there exists L_z>0 such that g_ij(z)|_z>L_z=g^+_ij, V_ij(z)|_z>L_z=V^+_ij, g_ij(z)|_z<-L_z=g^-_ij, V_ij(z)|_z<-L_z=V^-_ij, where g^±_ij and V^±_ij are constant matrices. The spectral parameter λ is not the energy, in general. For example, the energy enters into the matrices g_ij(z) and V_ij(z) as a parameter for the scattering problems in electrodynamics of dispersive media, and the physical value of λ is zero in this case. The corresponding nonstationary scattering problem is not described by the nonstationary Schrödinger equation associated with (<ref>). Notice that all the results of the present paper are applicable to the case where the matrices g_ij(z) and V_ij(z) are not real and symmetric but Hermitian. In that case, one just has to separate the real and imaginary parts of the initial matrix Schrödinger equation. The resulting system of equations will be of the form (<ref>) but of twice the size of the initial system. Let g^±_ij f^±_js λ^±_s = V^±_ij f^±_js (no summation over s), s=1,…,N, where λ^±_s∈ℝ are eigenvalues and the following normalization condition holds: f_±^T g_± f_± = 1. We consider the general case where all of λ^+_s and all of λ^-_s are different. The degenerate case is obtained by going to the respective limit. Let us introduce the diagonal matrices K^±_ss':=√(λ^±_s-λ) δ_ss', where the principal branch of the square root is chosen. In particular, if λ>λ^±_s, then √(λ^±_s-λ)=i√(λ-λ^±_s). By definition, the Jost solutions to Eq. (<ref>) have the asymptotics (F^+_±)_is(z;λ)→(f_+)_is'(e^± iK_+z)_s's as z→∞, and (F^-_±)_is(z;λ)→(f_-)_is'(e^± iK_-z)_s's as z→-∞.
For these solutions, we can write F^+_+(z;λ) = f_+e^iK_+z + ∫_z^∞ dt f_+ sin K_+(z-t)/K_+ f^T_+ U_+(t;λ) F^+_+(t;λ), F^+_-(z;λ) = f_+e^-iK_+z + ∫_z^∞ dt f_+ sin K_+(z-t)/K_+ f^T_+ U_+(t;λ) F^+_-(t;λ), F^-_+(z;λ) = f_-e^iK_-z - ∫_-∞^z dt f_- sin K_-(z-t)/K_- f^T_- U_-(t;λ) F^-_+(t;λ), F^-_-(z;λ) = f_-e^-iK_-z - ∫_-∞^z dt f_- sin K_-(z-t)/K_- f^T_- U_-(t;λ) F^-_-(t;λ), where U_±(z;λ):=∂_z g(z)∂_z + V(z;λ) - ∂_z g_±∂_z - V_± + λ g_±. By virtue of the assumption (<ref>), the integration in the integral representations of the Jost solutions is performed over a finite interval. Therefore, the Jost solutions are analytic functions of λ on a double-sheeted Riemann surface. The solutions (F^+_±)_is are the different branches of the same vector-valued analytic function of λ with the branching point λ=λ^+_s, whereas the solutions (F^-_±)_is are the different branches of the same vector-valued analytic function of λ with the branching point λ=λ^-_s. § ANALYTICAL PROPERTIES OF THE TRANSITION MATRIX The Jost solutions F^+_± and F^-_± constitute bases in the space of solutions of Eq. (<ref>). Consequently, F^+_+=F^-_+Φ_++F^-_-Ψ_+, F^+_-=F^-_+Ψ_- +F^-_-Φ_-, where (Φ_±)_ss'(λ) and (Ψ_±)_ss'(λ) are some z-independent matrices. It is clear that the Wronskian, w[φ,ψ]:=φ^T(z)g(z)∂_zψ(z)-∂_zφ^T(z)g(z)ψ(z), of two solutions φ(z) and ψ(z) of Eq. (<ref>) is independent of z and defines a skew-symmetric scalar product on the space of solutions of Eq. (<ref>). From the asymptotics (<ref>) we have w[F^±_+,F^±_+]=w[F^±_-,F^±_-]=0, w[F^±_+,F^±_-]=-2iK_±. This skew-symmetric scalar product allows one to express the matrices Φ_± and Ψ_± in terms of Wronskians of the Jost solutions: 2iK_-Φ_+ =w[F^-_-,F^+_+], -2iK_-Φ_- =w[F^-_+,F^+_-], -2iK_-Ψ_+ =w[F^-_+,F^+_+], 2iK_-Ψ_- =w[F^-_-,F^+_-]. We see from these relations that (Φ_±)_ss' and (Ψ_±)_ss' are analytic functions of λ with branching points of the square root type at λ=λ^-_s and λ=λ^+_s'. The four functions (Φ_±)_ss', (Ψ_±)_ss' are the four branches of the same analytic function of λ. Indeed, bypassing the branching point λ=λ^-_s, we have (K_-)_s→-(K_-)_s, (F^-_±)_is→ (F^-_∓)_is. Then, using (<ref>), we obtain (Φ_±)_ss'→(Ψ_±)_ss', (Ψ_±)_ss'→(Φ_±)_ss'. Similarly, bypassing the branching point λ=λ^+_s', we come to (Φ_±)_ss'→(Ψ_∓)_ss', (Ψ_±)_ss'→(Φ_∓)_ss'. Hence, starting from (Φ_+)_ss' and bypassing successively the branching points λ=λ^-_s and λ=λ^+_s', we obtain all the four functions (Φ_±)_ss', (Ψ_±)_ss'. Consider the action of complex conjugation on the matrices Φ_±(λ) and Ψ_±(λ). If λ does not belong to the cuts of the functions (K_-)_s and (K_+)_s', then, using (<ref>) and (<ref>), we obtain the following relations (Φ_±)^*_ss'(λ)=(Φ_∓)_ss'(λ^*), (Ψ_±)^*_ss'(λ)=(Ψ_∓)_ss'(λ^*). If λ lies on the cut of the function (K_-)_s but does not belong to the cut of the function (K_+)_s', then (Φ_±)^*_ss'(λ)=(Ψ_∓)_ss'(λ). If λ lies on the cut of the function (K_+)_s' but does not belong to the cut of the function (K_-)_s, then (Φ_±)^*_ss'(λ)=(Ψ_±)_ss'(λ). If λ lies on the cuts of the functions (K_-)_s and (K_+)_s', we have (Φ_±)^*_ss'(λ)=(Φ_±)_ss'(λ), (Ψ_±)^*_ss'(λ)=(Ψ_±)_ss'(λ), i.e., in this case (Φ_±)_ss'(λ) and (Ψ_±)_ss'(λ) are real. § BASIC IDENTITIES FOR THE SCATTERING MATRIX Formulas (<ref>), (<ref>) yield the relations Φ^T_+ K_-Ψ_+-Ψ^T_+K_-Φ_+ =0, Φ^T_+ K_-Φ_- -Ψ^T_+K_-Ψ_- =K_+, Φ^T_- K_-Ψ_- -Ψ^T_-K_-Φ_- =0, Φ_+ K_+^-1Ψ_-^T -Ψ_-K_+^-1Φ_+^T =0, Φ_+ K_+^-1Φ_-^T -Ψ_-K_+^-1Ψ_+^T =K_-^-1, Φ_- K_+^-1Ψ_+^T -Ψ_+K_+^-1Φ_-^T =0. Let us introduce the transmission matrices t_(1,2) and the reflection matrices r_(1,2): F^+_+t_(1)=F^-_++F^-_-r_(1), F^-_-t_(2)=F^+_-+F^+_+r_(2).
Combining the relations (<ref>) and comparing the result with (<ref>), we arrive at t_(1)=Φ_+^-1, r_(1)=Ψ_+Φ_+^-1, t_(2)=Φ_- -Ψ_+Φ_+^-1Ψ_-, r_(2)=-Φ_+^-1Ψ_-. The relations (<ref>) imply the symmetry properties K_-t_(2)=t_(1)^T K_+, K_-r_(1)=r_(1)^TK_-, K_+r_(2)=r_(2)^TK_+. Define the S-matrix as S:= [ [ t_(1) r_(2); r_(1) t_(2); ]]. To shorten the notation, we also introduce the operation that acts on the products of matrices Φ_±, Ψ_± and their inverses by the rule Φ̅_±:=Φ_∓, Ψ̅_±:=Ψ_∓, A^-1=(A̅)^-1, AB:=A̅B̅. Then the S-matrix possesses the symmetries [ [ 0 K_-; K_+ 0; ]] S= S^T [ [ 0 K_+; K_- 0; ]], [ [ 0 K_-; K_+ 0; ]] S̅= S̅^T [ [ 0 K_+; K_- 0; ]], S̅^T [ [ K_+ 0; 0 K_-; ]]S= [ [ K_- 0; 0 K_+; ]]. If ∈ℝ belongs to none of the cuts of the functions (K_±)_s, s=1,N, i.e., when all the scattering channels are open, the S-matrix is unitary S^†[ [ K_+ 0; 0 K_-; ]]S= [ [ K_- 0; 0 K_+; ]]. The proof of the theorem follows directly from the last equality in (<ref>) and the property (<ref>). Introducing the notation Φ̃_±:=K_-^1/2Φ_± K_+^-1/2, Ψ̃_±:=K_-^1/2Ψ_± K_+^-1/2, one can reduce (<ref>) to the standard form S̃^†S̃=1, where S̃ is expressed in terms of Φ̃_± and Ψ̃_± in the same way that S is expressed in terms of Φ_± and Ψ_±. § IDENTITIES IN THE SUBSPACE OF OPEN CHANNELS It is more difficult to prove unitarity of the S-matrix in the case when some of the channels are closed for z→-∞ and/or z→∞, i.e., when for some fixed ∈ℝ some of (K_±)_s are purely imaginary. Let the number of open channels for z→-∞ be l_o, and the number of closed channels be l_c. As for the numbers of open and closed channels for z→∞, we introduce the notation r_o and r_c, respectively. For definiteness, we assume that l_o⩾ r_o. It is clear that l_o+l_c=r_o+r_c=N. We split the relations (<ref>) into blocks with respect to the indices s, s' in accordance with the splitting into open and closed channels, (F^+_+)_ot_(1)oo+(F^+_+)_ct_(1)co =(F^-_+)_o+(F^-_-)_or_(1)oo+(F^-_-)_cr_(1)co, (F^+_+)_ot_(1)oc+(F^+_+)_ct_(1)cc =(F^-_+)_c+(F^-_-)_or_(1)oc+(F^-_-)_cr_(1)cc, (F^-_-)_ot_(2)oo+(F^-_-)_ct_(2)co =(F^+_-)_o+(F^+_+)_or_(2)oo+(F^+_+)_cr_(2)co, (F^-_-)_ot_(2)oc+(F^-_-)_ct_(2)cc =(F^+_-)_c+(F^+_+)_or_(2)oc+(F^+_+)_cr_(2)cc, where, for example, t_(1)= [ [ t_(1)oo t_(1)oc; t_(1)co t_(1)cc; ]], F^+_±=[ [ (F^+_±)_o (F^±_±)_c; ]]. Taking complex conjugate of these equations, bearing in mind the above conditions on , and using the expressions for the Jost solutions (<ref>), we arrive at (F^+_-)_ot^*_(1)oo+(F^+_+)_ct^*_(1)co =(F^-_-)_o+(F^-_+)_or^*_(1)oo+(F^-_-)_cr^*_(1)co, (F^+_-)_ot^*_(1)oc+(F^+_+)_ct^*_(1)cc =(F^-_+)_c+(F^-_+)_or^*_(1)oc+(F^-_-)_cr^*_(1)cc, (F^-_+)_ot^*_(2)oo+(F^-_-)_ct^*_(2)co =(F^+_+)_o+(F^+_-)_or^*_(2)oo+(F^+_+)_cr^*_(2)co, (F^-_+)_ot^*_(2)oc+(F^-_-)_ct^*_(2)cc =(F^+_-)_c+(F^+_-)_or^*_(2)oc+(F^+_+)_cr^*_(2)cc. Introducing the notation, (K_±)_o=:_±^o, (K_±)_c=:i_±^c, ^o,c_±>0, the symmetry relations (<ref>) become ^o_- t_(2)oo=(t_(1)oo)^T ^o_+, ^o_- t_(2)oc=(t_(1)co)^T i^c_+, i^c_- t_(2)co=(t_(1)oc)^T ^o_+, ^c_- t_(2)cc=(t_(1)cc)^T ^c_+, ^o_- r_(1)oo=(r_(1)oo)^T ^o_-, ^o_- r_(1)oc=(r_(1)co)^T i^c_-, i^c_- r_(1)co=(r_(1)oc)^T ^o_-, ^c_- r_(1)cc=(r_(1)cc)^T ^c_-, ^o_+ r_(2)oo=(r_(2)oo)^T ^o_+, ^o_+ r_(2)oc=(r_(2)co)^T i^c_+, i^c_+ r_(2)co=(r_(2)oc)^T ^o_+, ^c_+ r_(2)cc=(r_(2)cc)^T ^c_+. Without loss of generality, we can assume that the rank of the matrix t_(1)oo is maximal and is equal to r_o. Then it follows from the first relation in (<ref>) that the rank of the matrix t_(2)oo is also r_o. 
In this case we have t_(1)oot^∨_(1)oo=t^∨_(2)oot_(2)oo=1, t^∨_(1)oot_(1)oo=:L_(1), t_(2)oot^∨_(2)oo=:L_(2), where A^∨ is a pseudo-inverse matrix to A, and L_(1) and L_(2) are Hermitian projectors of rank r_o in the subspace of open channels at z→-∞. It follows from the definition of pseudo-inverse matrix that t_(1)ooL̅_(1)=L̅_(1)t^∨_(1)oo=0, L̅_(2)t_(2)oo=t^∨_(2)ooL̅_(2)=0, where L̅_(1,2):=1-L_(1,2). Further, we express the functions (F^+_-)_o from (<ref>) and substitute them into (<ref>). This gives rise to (F^+_+)_o +(F^+_+)_c[r^*_(2)co -t^*_(1)co(t^*_(1)oo)^∨ r^*_(2)oo]= =(F^-_+)_o[t^*_(2)oo-r^*_(1)oo(t^*_(1)oo)^∨ r^*_(2)oo] -(F^-_-)_o(t^*_(1)oo)^∨ r^*_(2)oo +(F^-_-)_c[t^*_(2)co-r^*_(1)co(t^*_(1)oo)^∨ r^*_(2)oo]. Then multiplying (<ref>) by t_(1)oo^∨ and comparing the result with the equation above, we have t_(1)oo^∨ =t^*_(2)oo-r^*_(1)oo(t^*_(1)oo)^∨ r^*_(2)oo, r_(1)ooL_(1) =-(t^*_(1)oo)^∨ r^*_(2)oo t_(1)oo, t_(1)coL_(1) =[r^*_(2)co-t^*_(1)co(t^*_(1)oo)^∨ r^*_(2)oo] t_(1)oo, r_(1)coL_(1) =[t^*_(2)co-r^*_(1)co(t^*_(1)oo)^∨ r^*_(2)oo] t_(1)oo. We infer from (<ref>) that L̅^*_(1)r_(1)ooL_(1)=0. Then (<ref>) implies L̅_(1)t^*_(2)oo=0. Consequently, L̅_(1)L^*_(2)=0 ⇒ L_(1)L^*_(2)=L^*_(2)=L^*_(2)L_(1). As the ranks of L_(1) and L^*_(2) are the same, we have L_(1)=L^*_(2)=:L. The first relation in (<ref>) implies t^∨_(2)oo=(_+^o)^-1(t^T_(1)oo)^∨_-^o L^*, whence L^*=t_(2)oot^∨_(2)oo=(_+^o)^-1L^T _+^oL^*. Therefore, _+^oL =L_+^oL= L_+^o. Thus we see that L is a diagonal matrix and hence L^*=L. § UNITARITY IN THE SUBSPACE OF OPEN CHANNELS Now we are in position to prove unitarity of the S-matrix in open channels as well as to obtain the other relations involving the different components of t_(1,2) and r_(1,2). The S-matrix in the subspace of open channels is unitary. It follows from (<ref>) that t^*_(1)oo r_(1)ooL+r^*_(2)oot_(1)oo=0 ⇒ t^*_(1)oo r_(1)oo+r^*_(2)oot_(1)oo=0, where the properties (<ref>), (<ref>) have been taken into account in the last equality. Given the relation (<ref>), equality (<ref>) implies t^*_(2)oot_(1)oo+r^*_(1)oor_(1)ooL=L. Using the symmetry relations (<ref>) in (<ref>), we obtain r_(1)oot^*_(2)oo+t_(2)oor^*_(2)oo=0. Taking complex conjugate of (<ref>) and multiplying the resulting expression by t^∨_(1)oo from the left and by t_(1)oo from the right, using (<ref>), (<ref>), we come to t^*_(1)oot_(2)oo+r^*_(2)oor_(2)oo=1. The relations (<ref>), (<ref>), (<ref>), (<ref>) lead to the unitarity relations for the S-matrix in open channels. Indeed, substituting the symmetry relations (<ref>) into these expressions, we have t^†_(2)oo^o_-r_(1)ooL+r^†_(2)oo^o_+t_(1)oo =0, t^†_(1)oo^o_+ t_(1)oo +r^†_(1)oo^o_-r_(1)ooL =^o_-L, r^†_(1)oo^o_-t_(2)oo+t^†_(1)oo^o_+r_(2)oo =0, t^†_(2)oo^o_-t_(2)oo +r^†_(2)oo^o_+r_(2)oo =1, where the complex conjugate equation (<ref>) has been used in the third equality. There are also the relations for the matrix r_(1)oo. In order to deduce them, multiply (<ref>) by L̅r_(1)ooL̅ from the right and compare with (<ref>) multiplied by L̅ from the right. As a result, we have t_(1)coL̅=t^*_(1)coL̅r_(1)ooL̅, r_(1)coL̅=r^*_(1)coL̅r_(1)ooL̅, L̅=r^*_(1)ooL̅r_(1)ooL̅. Combining the last relation with (<ref>), we find t^*_(2)oot_(1)oo+r^*_(1)oor_(1)oo=1. As for the second relation in (<ref>), it is written as t^†_(1)oo^o_+ t_(1)oo +r^†_(1)oo^o_-r_(1)oo=^o_-. The relations (<ref>), (<ref>) are nothing but the unitarity relations for the S-matrix (<ref>) in the subspace of open channels. 
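As a simple numerical illustration of the unitarity property just established (not part of the original argument), consider the single-channel case N = 1 with g = 1 and a smooth profile V(z) interpolating between constant values V_- and V_+. All numerical values below are hypothetical; the sketch constructs the Jost solution by integrating from the right asymptotic region and checks the flux-weighted relation K_+|t|^2 + K_-|r|^2 = K_-:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Both channels open: V_- - lam > 0 and V_+ - lam > 0, so K_-, K_+ are real.
V_minus, V_plus, lam, L = 3.0, 2.0, 0.0, 10.0

def V(z):
    return V_minus + (V_plus - V_minus) * 0.5 * (1.0 + np.tanh(z))

K_m, K_p = np.sqrt(V_minus - lam), np.sqrt(V_plus - lam)

def rhs(z, y):                      # u'' + (V(z) - lam) u = 0
    u, du = y
    return [du, -(V(z) - lam) * u]

# Jost solution F^+_+ ~ exp(i K_+ z) for z -> +infinity, integrated to the left.
y_right = [np.exp(1j * K_p * L), 1j * K_p * np.exp(1j * K_p * L)]
sol = solve_ivp(rhs, [L, -L], y_right, rtol=1e-10, atol=1e-12)
u, du = sol.y[0, -1], sol.y[1, -1]

# Match F^+_+ = a exp(i K_- z) + b exp(-i K_- z) at z = -L; then t_(1) = 1/a, r_(1) = b/a.
a = (du + 1j * K_m * u) / (2j * K_m * np.exp(-1j * K_m * L))
b = (1j * K_m * u - du) / (2j * K_m * np.exp(1j * K_m * L))
t, r = 1.0 / a, b / a

print("K+|t|^2 + K-|r|^2 =", K_p * abs(t) ** 2 + K_m * abs(r) ** 2)
print("K-                =", K_m)
```

With the tight integration tolerances used here, the two printed numbers should agree to high accuracy, in line with the unitarity relations derived above.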
There are also the additional relations connecting the components t_(1,2) and r_(1,2) of closed and open channels. The following relations hold t_(1)cc =t^*_(1)cc-r_(2)cot^*_(1)oc-t_(1)cor^*_(1)oc, r_(1)cc =r^*_(1)cc-t_(2)cot^*_(1)oc -r_(1)cor^*_(1)oc, t_(1)co =t^*_(1)cor_(1)oo+r^*_(2)cot_(1)oo, r_(1)co =t^*_(2)cot_(1)oo+r^*_(1)cor_(1)oo, t_(1)oc =-r_(2)oot^*_(1)oc-t_(1)oor^*_(1)oc, r_(1)oc =-t_(2)oot^*_(1)oc-r_(1)oor^*_(1)oc. There are also the relations obtained from these ones by replacing 1↔2. To deduce these relations, we express (F^-_+)_o and (F^+_-)_o from (<ref>) times L̅, (<ref>) times L, and (<ref>). Then we substitute these Jost solutions into (<ref>) and compare the result with (<ref>). Further, we also substitute (F^-_+)_o and (F^+_-)_o found in this way into (<ref>) and compare the result with (<ref>). Having carried out this, we arrive at the relations from the statement of the proposition. § BOUND STATES Let Φ_+()=0 for some ∈ℂ. Then ∃ v()≠0, w()≠0: Φ_+()v()=0, w^T()Φ_+()=0. In a general position, the rank of Φ_+() drops by one at the given point . Therefore, we can assume that the conditions (<ref>) determine the vectors v and w uniquely up to multiplication by a constant. It follows from the first relation in (<ref>) that Φ_+^T K_-Ψ_+v=0 ⇒ Ψ_+v=K_-^-1w. By multiplying the first relation in (<ref>) by v from the right, we obtain a particular solution to Eq. (<ref>) of the form F^+_+v=F^-_-Ψ_+v=F^-_-K_-^-1w. Given the asymptotic behavior of Jost solutions (<ref>), this solution decreases exponentially for z→±∞, provided that √(_s^±-)∈(0,π) for all s, whereas it increases exponentially for z→±∞ provided that √(_s^±-)∈(-π,0) for all s. Note that for the branch of the root we have chosen, the values of the argument (-π,-π/2)∪(π/2,π) correspond to the second sheet of the Riemann surface. Since Eq. (<ref>) is a spectral problem for a self-adjoint operator, this equation cannot possess square-integrable solutions for ∉ℝ. Hence Φ_+()≠0 when ∉ℝ and √(_s^±-)∈(0 ,π) for all s. In virtue of the restrictions (<ref>) on the asymptotic behavior of g_ij(z) and V_ij(z), the bound solutions to Eq. (<ref>) tend exponentially to zero when z→±∞. Therefore, as the Jost solutions F^+_± and F^-_± constitute bases in the space of solutions to Eq. (<ref>), the condition (<ref>) is a necessary and sufficient condition for the existence of a bound state provided ∈ℝ and v_o=0 and w_o=0. The corresponding bound state is given by formula (<ref>). The following obvious assertion is valid. If all scattering channels are open, there are no bound states. Let ∈ℝ be such that all the scattering channels are open, i.e., all √(_s^±-) are real. Then, multiplying the second relation in (<ref>) by v^T from the left and by v^* from the right, we arrive at -v^TΨ_+^TK_-Ψ_-v^*=-v^TΨ_+^TK_-(Ψ_+v)^*=v^TK_+v^*. The expression on the left-hand side is negative-definite, while the expression on the right-hand side is positive-definite. Therefore, v=0. Now let ∈ℝ be such that some of the scattering channels are closed, as described before formulas (<ref>), and the condition (<ref>) is satisfied. Then The particular solution (<ref>) includes only those Jost solutions that correspond to closed channels, i.e., v_o=0 and w_o=0. This is a bound state. Partition the matrix Φ_+ into blocks Φ_+= [ [ (Φ_+)_11 (Φ_+)_12; (Φ_+)_21 (Φ_+)_22; ]], where the block (Φ_+)_11 has dimensions r_o× r_o and acts from the open channels on the right to the open channel subspace on the left, this subspace being distinguished by the projector L. 
Then there are the relations (Φ_+)_11v_1+(Φ_+)_12v_2 =0, (Φ_+)_21v_1+(Φ_+)_22v_2 =0, w_1^T(Φ_+)_11+w_2^T(Φ_+)_21 =0, w_1^T(Φ_+)_12+w_2^T(Φ_+)_22 =0, and t^-1_(1)11=(Φ_+)_11-(Φ_+)_12(Φ_+)^-1_22(Φ_+)_21. Acting on the last expression by w_1^T and v_1 from the left and from the right, respectively, and employing the relations (<ref>), we see that w_1 and v_1 are the left and right null vectors of t^- 1_(1)11. However, the unitarity relation (<ref>) implies that t_(1)11=t_(1)ooL is a bounded operator. Hence w_1=Lw_o=0 and v_1=v_o=0. It follows from the first relation in (<ref>) that (Φ_+)_oot_(1)oo+(Φ_+)_oct_(1)co=1, (Φ_+)_cot_(1)oo+(Φ_+)_cct_(1)co=0. Whence, multiplying both the equalities by L̅ from the right, we obtain (Φ_+)_oct_(1)coL̅=L̅, (Φ_+)_cct_(1)coL̅=0. Since w_o^T(Φ_+)_oo+w_c^T(Φ_+)_co=0, w_o^T(Φ_+)_oc+w_c^T(Φ_+)_cc=0, multiplying the first relation in (<ref>) by w_o^T from the left and using the second relation in (<ref>), we arrive at w_o^TL̅=-w_c^T(Φ_+)_cct_(1)coL̅=0, where the last equality follows from the second relation in (<ref>). Thus, we have proved that v_o=0, w_o=0. As a result, both the left- and right-hand sides of equality (<ref>) defining a particular solution to Eq. (<ref>) include only those Jost solutions that correspond to the closed scattering channels. Therefore, this particular solution decreases exponentially as z→±∞ and so it is a bound state. Notice that v_c≠0 and w_c≠0 since otherwise v=0 and w=0 in contradiction with (<ref>). Thus we have proved the theorem The following condition Φ_+()=0, ∈ℝ, is a necessary and sufficient condition for the existence of bound states of Eq. (<ref>). § SHORTWAVE ASYMPTOTICS Notice that S()=Φ_-()/Φ_+()=Φ̃_-()/Φ̃_+()=S̃(). Let us show that Φ_±()→1 and Φ̃_±()→1 as ||→∞ and so S()→1 and S̃()→1 in this limit. Let us find the explicit expressions for the Jost solutions F̃^+_±:=F^+_± K_+^-1/2, F̃^-_±:=F^-_± K_-^-1/2, when ||→∞. In this limit, one can use the semiclassical matrix approximation <cit.> to obtain a solution of Eq. (<ref>). We are looking for a solution of Eq. (<ref>) in the form f_is(z)e^iS_s(z) (no summation over s). Then, in leading order, we have <cit.> g_ij(z)f_js(z)_s(z)=V_ij(z)f_js(z) (no summation over s), where _s(z)∈ℝ are eigenvalues and K_s(z)=S'_s(z)=√(_s(z)-), f^T(z)g(z)f(z)=1. The following relations are also fulfilled [K_s(z) f^†_s(z)g(z)f_s(z)]'=0, [f^†_s(z)g(z)f'_s(z)]=0. As a result, we obtain the semiclassical expressions for the Jost solutions (F̃^+_±)_is(z)=f_is(z)/K^1/2_s(z)e^± iS^+_s(z), (F̃^-_±)_is(z)=f_is(z)/K^1/2_s(z)e^± iS^-_s(z), where S^+_s(z)=(K_+)_sL_z-∫_z^L_z dz'K_s(z'), S^-_s(z)=∫_-L_z^z dz'K_s(z')-(K_-)_sL_z, and L_z was defined in (<ref>). One can replace L_z in the expressions for S_s^±(z) by any real number greater than or equal to L_z. Using the first relation in (<ref>), we derive 2i(Φ̃_+)_ss'=w[(F̃^-_-)_s,(F̃^+_+)_s']=2i_ss'e^i(S_s^+-S_s^-). In the last equality, the semiclassical expressions (<ref>) for the Jost solutions have been employed and S^+_s(z)-S^-_s(z)=[(K_+)_s +(K_-)_s]L_z-∫_-L_z^L_zdz' K_s(z'). It is evident from the explicit expressions for K_±() and K() that S^+_s(z)-S^-_s(z)||→∞→0. Therefore, (Φ̃_+)_ss'||→∞→_ss', and Φ̃_+()||→∞→1. As long as Φ̃_-()=Φ̃^*_+(^*) outside the cuts, the asymptotics (<ref>) is also valid for Φ̃_-(). It is clear that the same asymptotics hold for Φ_±(). § CONCLUSION Let us summarize the results. 
We have considered a multichannel stationary scattering problem for the one-dimensional Schrödinger equation with a potential V_ij(z) and an inverse mass matrix g_ij(z), where z∈ℝ. The matrices V_ij(z) and g_ij(z) are assumed to be real, symmetric, and equal to constant values for sufficiently large |z|. Their matrix elements are supposed to be piecewise continuous functions. The matrix g_ij(z) is assumed to be positive-definite. We consider the general case and do not suppose that the asymptotics of the matrices V_ij(z) and g_ij(z) for z→-∞ and for z→∞ coincide. The analytical structure of the Jost solutions (F^+_±)_is and (F^-_±)_is as functions of the auxiliary spectral parameter has been investigated. It has been shown that the Jost solutions (F^+_±)_is are the different branches of the same vector-valued analytic function on a double-sheeted Riemann surface. The same is true for (F^-_±)_is but with different branch points. It has also been shown that the matrix-valued functions that form the transition matrix between the bases (F^+_±)_is and (F^-_±)_is are the different branches of the same analytic matrix-valued function. The multichannel scattering matrix has been investigated. The key result of this paper is the Theorem <ref> that proves unitarity of the scattering matrix in the subspace of open channels. One would expect that the scattering matrix should be unitary in the subspace of open channels on physical grounds. It is also clear that in the general case the complete S-matrix is not unitary in the presence of closed channels. Nevertheless, the proof of this fact is nontrivial and appears to be obtained in this paper for the first time. A weak version of this theorem, viz., the statement about unitarity of the S-matrix in the case when all the scattering channels are open, has also been proved (Theorem <ref>). The proof of the latter statement is known in the literature in the case when the asymptotics of the matrices g_ij(z) and V_ij(x) as z→±∞ coincide <cit.>. In addition to the unitarity relations for the S-matrix in the subspace of open scattering channels, the other relations connecting the components of the reflection and transmission matrices in the subspaces of closed and open channels have been deduced. The condition determining the bound states has been obtained. In particular, it has been shown that the necessary condition for the presence of bound states is the presence of a nonzero subspace of closed channels. The asymptotics of the Jost solutions and of the transition matrix at a large spectral parameter have been investigated. It has been shown that the Schrödinger equation under study is solvable in the shortwave approximation. The explicit expressions for the Jost solutions in the semiclassical approximation and the asymptotics of the transition matrix between the bases constituted by the Jost solutions have been obtained. The results of the paper are applicable in electrodynamics of continuous media <cit.>, in sound wave propagation theory <cit.>, in describing the passage of electrons through heterostructures <cit.>, in quantum chemistry <cit.>, in hydrodynamics and plasma physics <cit.>, etc. In particular, the issue of proving unitarity of the S-matrix in open channels arises in describing photon scattering by metamaterials with a large spatial dispersion. The presence of spatial dispersion is caused by the presence of additional degrees of freedom – the plasmon polaritons, which exist only inside the medium. 
As a result, there are always the closed channels in scattering of photons by such media, and unitarity becomes less obvious from the physical point of view. In this paper, we prove that the S-matrix is unitary for such systems as well provided the appropriate boundary conditions on the additional degrees of freedom are imposed. Acknowledgments. This study was supported by the Tomsk State University Development Programme (Priority-2030). 999 Newton_scat Newton, R. G.: Scattering Theory of Waves and Particles. Springer-Verlag, New York (1982) Zakharov1980 Zakharov, V. E., Manakov, S. V., Novikov, S. P., Pitaevskiy, L. P.: Theory of solitons: method of inverse problem [in Russian]. Nauka, Moscow (1980) Faddeev1958 Faddeev, L. D.: On the connection between the S-matrix and the potential for the one-dimensional Schrödinger operator [in Russian]. DAN SSSR 121, 63-66 (1958) Kay1960 Kay, I.: The inverse scattering problem when the reflection coefficient is a rational function. Commun. Pure Appl. Math. 13, 371-393 (1960) Faddeev1964 Faddeev, L. D.: Properties of the S-matrix of the one-dimensional Schrödinger equation [in Russian]. Trudy Mat. Inst. Steklov 73, 314 (1964) Faddeev1974 Faddeev L. D.: Inverse Problem in quantum scattering theory. II [in Russian]. Itogi nauki i techn. Ser. Sovrem. probl. mat. 3, 93-180 (1974) Newton1980 Newton, R. G.: Inverse scattering. I. One dimension. J. Math. Phys. 21, 493-505 (1980) Aktosun1992 Aktosun, T., Klaus, M., van der Mee, C.: Scattering and inverse scattering in one-dimensional nonhomogeneous media. J. Math. Phys. 33, 1717-1744 (1992) Aktosun1994 Aktosun, T.: Bound states and inverse scattering for the Schrödinger equation in one dimension. J. Math. Phys. 35, 6231-6236 (1994) Gesztesy1997 Gesztesy, F., Nowell, R., Pötz, W.: One-dimensional scattering theory for quantum systems with nontrivial spatial asymptotics. Differ. Integral Equ. 10, 521-546 (1997) Ostrovsky2005 Ostrovsky, V. N., Elander, N.: Scattering resonances and background associated with an asymmetric potential barrier via Siegert pseudostates. Phys. Rev. A 71, 052707 (2005) deMonvel2008 Boutet de Monvel, A., Egorova, I., Teschl, G.: Inverse scattering theory for one-dimensional Schrödinger operators with steplike finite-gap potentials. J. d'Analyse Math. 106, 271-316 (2008) Ning1995 Ning, W.: Inverse scattering problem for the coupled second order ODE. J. Phys. Soc. Japan 64, 4589-4597 (1995) melgaard2001 Melgaard, M.: Spectral Properties at a Threshold for Two-Channel Hamiltonians: II. Applications to Scattering Theory. J. Math. Anal. Appl. 256, 568-586 (2001) melgaard2002 Melgaard, M.: On bound states for systems of weakly coupled Schrödinger equations in one space dimension. J. Math. Phys. 43, 5365-5385 (2002) vanDijk2008 van Dijk, W., Spyksma, K., West, M.: Nonthreshold anomalous time advance in multichannel scattering. Phys. Rev. A 78, 022108 (2008) Wadati1974 Wadati, M., Kamijo, T.: On the Extension of Inverse Scattering Method. Prog. Theor. Phys. 52, 397414 (1974) Calogero1976 Calogero, F.: Generalized Wronskian relations one dimensional Schroedinger equation and nonlinear partial differential equations solvable by the inverse scattering method, Nuovo Cim. B 31 229-249 (1976) Wadati1980 Wadati, M.: Generalized matrix form of the inverse scattering method. Solitons, 17, 287-299 (1980) Alonso1982 Alonso, L. M., Olmedilla, E.: Trace identities in the inverse scattering transform method associated with matrix Schrödinger operators. J. Math. Phys. 
23, 2116-2121 (1982) Olmedilla1985 Olmedilla, E.: Inverse scattering transform for general matrix Schrödinger operators and the related symplectic structure. Inverse Prob. 1, 219 (1985) Zakhariev1990 Zakhariev, B. N., Suzko, A. A.: Direct and inverse problems: potentials in quantum scattering. Springer, Heidelberg (1990) Kiers1996 Kiers, K. A., Van Dijk, W.: Scattering in one dimension: The coupled Schrödinger equation, threshold behaviour and Levinson's theorem. J. Math. Phys. 37, 6033-6059 (1996) Aktosun2001 Aktosun, T., Klaus, M., van der Mee, C.: Small-energy asymptotics of the scattering matrix for the matrix Schrödinger equation on the line. J. Math. Phys. 42, 4627-4652 (2001) Corona2004 Corona-Corona, G.: A Wronskian of Jost solutions. J. Math. Phys., 45, 4282-4287 (2004) Bondarenko2017 Bondarenko, N.: Inverse scattering on the line for the matrix SturmLiouville equation. J. Differ. Equ. 262, 2073-2105 (2017) Sofianos1997 Sofianos, S. A., Braun, M., Lipperheide, R., Leeb, H.: Coupled-channel Marchenko inversion in one dimension with thresholds. Lect. Notes Phys. 488, 54 (1997) Braun2003 Braun, M., Sofianos, S. A., Leeb, H.: Multichannel inverse scattering problem in one dimension: Thresholds and bound states. Phys. Rev. A 68, 012719 (2003) Aktosun2000 Aktosun, T., Klaus, M., van der Mee, C.: Direct and inverse scattering for selfadjoint Hamiltonian systems on the line. Integral Equ. Oper. Theory. 38, 129-171 (2000) Sakhnovich2003 Sakhnovich, A.: Dirac type system on the axis: explicit formulae for matrix potentials with singularities and solitonpositon interactions. Inverse Probl. 19, 845 (2003) BabBuld Babič, V. M., Buldyrev, V. S.: Short-Wavelength Diffraction Theory: Asymptotic Methods. Springer, Berlin (1991) Maslov Maslov, V. P.: The Complex WKB Method for Nonlinear Equations I. Linear Theory. Springer, Basel (1994) BagTrif Bagrov, V. G., Belov, V. V., Trifonov, A. Yu.: Methods of Mathematical Physics: Asymptotic Methods in Relativistic Quantum Mechanics [in Russian]. Tomsk Polytechnic University Press, Tomsk (2006) RMKD18 Reijnders, K. J. A., Minenkov, D. S., Katsnelson, M. I., Dobrokhotov, S. Yu.: Electronic optics in graphene in the semiclassical approximation. Annals Phys. 397, 65 (2018) BKKL21 Bogdanov, O. V., Kazinski, P. O., Korolev, P. S., Lazarenko, G. Yu.: Generation of hard twisted photons by charged particles in cholesteric liquid crystals. Phys. Rev. E 104, 024701 (2021). BelyakovBook Belyakov, V.: Diffraction Optics of Complex-Structured Periodic Media. Springer, Cham (2019). KazKor22 Kazinski, P. O., Korolev, P. S.: Scattering of plane-wave and twisted photons by helical media. J. Phys. A: Math. Theor. 55, 395301 (2022) BKKL23 Bogdanov, O. V., Kazinski, P. O., Korolev, P. S., Lazarenko, G. Yu.: Short wavelength band structure of photons in cholesteric liquid crystals. J. Mol. Liq. 371 121095 (2023) Oldano2000 Oldano, C., Ponti, S.: Acoustic wave propagation in structurally helical media. Phys. Rev. E 63, 011703 (2000) Mabuza2005 Mabuza, B. R.: Applied inverse scattering. PhD thesis, University of South Africa, Pretoria (2005) Georgiannis2011 Gerogiannis, D., Sofianos, S. A., Lagaris, I. E., Evangelakis, G. A.: One-dimensional inverse scattering problem in acoustics. Braz. J. Phys. 41, 248257 (2011) Liu1996 Liu, Y. X., Ting, D. Z.-Y., McGill, T. C.: Efficient, numerically stable multiband kṗ treatment of quantum transport in semiconductor heterostructures. Phys. Rev. B 54, 5675 (1996) Sofianos2007 Sofianos, S. A., Rampho, G. J., Azemtsa Donfack, H., Lagaris, I. 
E., Leeb, H.: Design of quantum filters with pre-determined reflection and transmission properties. Microelectronics J. 38 235244 (2007) Botha2008 Botha, A. E.: Design of semiconductor heterostructures via inverse quantum scattering. Mod. Phys. Lett. B 22, 2151-2161 (2008) Miroshnichenko2010 Miroshnichenko, A. E., Flach, S., Kivshar, Y. S.: Fano resonances in nanoscale structures. Rev. Mod. Phys. 82, 2257-2298 (2010) Shubin2019 Shubin, N. M.: The study of resonances and antiresonances in quantum conductors and in the elements of molecular nanoelectronic based on them. PhD thesis, Moscow Insitute of Electronic Technology, Moscow (2019) Friedman1999 Friedman, R. S., Allison, T. C., Truhlar, D. G.: Exciplex funnel resonances in chemical reaction dynamics: The nonadiabatic tunneling case associated with an avoided crossing at a saddle point. Phys. Chem. Chem. Phys. 1 1237-1247 (1999) Adam1986 Adam, J. A.: Critical layer singularities and complex eigenvalues in some differential equations of mathematical physics. Phys. Rep. 142, 263-356 (1986)
http://arxiv.org/abs/2307.01442v1
20230704022710
Quantized generalized minimum error entropy for kernel recursive least squares adaptive filtering
[ "Jiacheng He", "Gang Wang", "Kun Zhang", "Shan Zhong", "Bei Peng", "Min Li" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Quantized generalized minimum error entropy for kernel recursive least squares adaptive filtering (This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.) Jiacheng He, Gang Wang, Kun Zhang, Shan Zhong, Bei Peng, Min Li August 1, 2023 ========================================================================================================================================================================================= The robustness of the kernel recursive least squares (KRLS) algorithm has recently been improved by combining it with more robust information-theoretic learning criteria, such as the minimum error entropy (MEE) and generalized MEE (GMEE) criteria, although this also increases the computational complexity of the KRLS-type algorithms to a certain extent. To reduce the computational load of the KRLS-type algorithms, the quantized GMEE (QGMEE) criterion is, in this paper, combined with the KRLS algorithm, and as a result two kinds of KRLS-type algorithms, called quantized kernel recursive MEE (QKRMEE) and quantized kernel recursive GMEE (QKRGMEE), are designed. In addition, the mean error behavior, mean square error behavior, and computational complexity of the proposed algorithms are investigated. Furthermore, simulation and real experimental data are utilized to verify the feasibility of the proposed algorithms. kernel recursive least squares, quantized generalized minimum error entropy, quantized kernel recursive generalized minimum error entropy. § INTRODUCTION The use of kernel approaches in various machine learning and signal processing problems, including classification, regression, clustering, and feature selection, has gained considerable traction. Support vector machines (SVM) <cit.>, kernel regularization networks (KRN) <cit.>, and kernel principal component analysis (KPCA) <cit.>, among others <cit.>, are effective examples of kernel approaches. The major benefit of kernel approaches is their capacity to universally and linearly represent nonlinear functions in the Reproducing Kernel Hilbert Space (RKHS) induced by a Mercer kernel. Kernel affine projection algorithms (KAPAs) <cit.>, kernel least mean square (KLMS) <cit.>, kernel recursive least squares (KRLS) <cit.>, extended kernel recursive least squares (EX-KRLS) <cit.>, and others are examples of typical kernel adaptive filtering (KAF) algorithms. By assigning a new kernel unit to each fresh sample, with the input as the center, radial-kernel-based KAF algorithms create a linearly growing radial basis function (RBF) network. The coefficients corresponding to the centers are related to the errors at the respective samples. The KRLS algorithm is particularly well known for its consistent performance in nonlinear systems in the presence of Gaussian noise. Non-Gaussian noise, however, frequently arises in many practical applications, including underwater communications <cit.>, parameter identification <cit.>, and acoustic echo cancellation <cit.>. The presence of non-Gaussian noise makes the performance of the KRLS algorithm based on the mean square error (MSE) criterion significantly worse.
Thus, new adaptation criteria based on information theoretic learning (ITL) <cit.> are proposed to overcome these challenges. They are able to benefit from a distribution's higher-order statistics. Examples of ITL criteria that are frequently used to improve the learning efficacy of algorithms include the maximum correntropy criterion (MCC) <cit.>, the generalized maximum correntropy criterion (GMCC) <cit.>, and the minimum error entropy (MEE) criterion <cit.>. Using the aforementioned learning criteria, the KLMS and KRLS algorithms are integrated with the MCC, and the kernel maximum correntropy (KMC) algorithm <cit.> and the kernel recursive maximum correntropy (KRMC) algorithms <cit.> are developed respectively. To further improve the performance of the KAF algorithms, the GMCC is combined with KAF algorithms, as a result of which the generalized kernel maximum correntropy (GKMC) <cit.> and the kernel recursive generalized maximum correntropy (KRGMCC) <cit.> are developed. MEE criterion is considered to have better performance than MCC <cit.>, and the kernel MEE (KMEE) algorithm <cit.> and kernel recursive MEE (KRMEE) algorithm <cit.> are investigated. Due to its smoothness and stringent positive definiteness, the Gaussian kernel function is generally chosen as the kernel function of the original MEE. The Gaussian kernel function, however, may not always be the optimal choice <cit.>. The introduction of the generalized Gaussian density leads to the development of the generalized minimum error entropy (GMEE) criterion <cit.>. Furthermore, the kernel recursive GMEE (KRGMEE) algorithm with better performance is also derived in <cit.>. Due to double summation of GMEE criteria, the computational complexity of the information potential (IP), the fundamental GMEE cost in ITL, is quadratic in terms of sample number. The introduction of GMEE improves the performance of the KRLS algorithm while enhancing the computational complexity to some extent, especially for large-scale data sets. In this study, the quantizer <cit.> is utilized to reduce the computational complexity of IP of GMEE and MEE criteria. The quantized kernel recursive GMEE (QKRGMEE) algorithm and the quantized kernel recursive MEE (QKRMEE) are created by combining the QGMEE criterion with the KRLS method, and QKRMEE is a specific variant of the QKRGMEE algorithm. In addition, several properties of the QGMEE criterion are discussed to further refine the theoretical framework of QGMEE. The mean error behavior and mean square error behavior of the proposed quantized algorithm are presented. Moreover, the performance and computational complexity of the presented quantized methods are compared with KRGMEE and KRGMEE algorithms. The main contributions of this study are the following. (1) The properties of the proposed QGMEE criterion are analysed and discussed. (2) Two quantized kernel recursive least squares algorithms (QKRMEE and QKRGMEE), with lower computational complexity, are proposed. (3) The performance of the proposed algorithm is verified using Electroencephalogram (EEG) data. The remainder of the study is organized as follows. The properties of the QGMEE are shown in Section <ref>. The proposed KRLS algorithms are presented in Section <ref>. Section <ref> and Section <ref> are performance analyses and simulations, respectively. Finally, Section <ref> provides the conclusion. 
§ THE PROPERTIES OF THE QUANTIZED GENERALIZED ERROR ENTROPY §.§ Quantized generalized error entropy From <cit.>,the definition of the quantized generalized error entropy is H_μ( e ) = 1/1 - μlog V_α ,β^μ( e ), where μ( μ 1,μ > 0) stands for the entropy order and e for the error between X and Y. A continuous variable IP V_α ,β^μ( e ) can be written as V_α ,β^μ( X,Y) = V_α ,β^μ( e ) = ∫p_α ,β^μ( e )de = E[ p_α ,β^μ - 1( e )], where E[ ·] represents the expectation operation, and μ is set to 2 in <cit.>. p_α ,β( ·) denotes the PDF of error e, as well as p_α ,β( ·) can be estimated using p_α ,β( e ) ≈p̂_α ,β( e ) = 1/L∑_i = 1^L G_α ,β( e - e_i) . Only a limited number of error sets {e_i}_i = 1^L may be obtained in actual applications, and L is the length of Parzen window. Substituting (<ref>) into (<ref>) with μ = 2, and one method for estimating the IP V_α ,β( X,Y) is V̂_α ,β( X,Y) = V̂_α ,β( e) = 1/L∑_i = 1^L p̂_α ,β( e_i) = 1/L^2∑_i = 1^L ∑_j = 1^L G_α ,β( e_i - e_j), with G_α ,β( e ) = α/2βΓ( 1/α)exp( - | e |^α/β ^α), where G_α ,β( ·) the generalized Gaussian density function <cit.>, α and β > 0 refer to the shape and scale parameters, | ·| is taking the absolute value, and Γ( ·) stands for the gamma function. Using estimator (<ref>), the quadratic IP may be calculated. A quantizer Q( e_i,γ) ∈ C <cit.> (quantization threshold γ is employed to obtain a codebook C = {c_1,c_2, ⋯c_H∈ℝ^1} in order to lessen the computational load on GMEE. The empirical information potential can be simplified to V̂_α ,β( e) = 1/L∑_i = 1^L p̂_α ,β( e_i) ≈V̂_α ,β^Q( e) = 1/L^2∑_i = 1^L ∑_j = 1^L G_α ,β[ e_i - Q[ e_j,γ]] = 1/L^2∑_i = 1^L ∑_h = 1^H H_hG_α ,β[ e_i - c_h] = 1/Lp̂_α ,β^Q( e_i), where H_h is the number of quantized error samples c_h. And, one can get L = ∑_h = 1^H H_h and ∫p̂_α ,β^Q( e )de = 1. The adjustable threshold γ controls the number of elements in the codebook and thus the computational effort of the algorithm. When α = 2, the QGMEE criterion translates into QMEE criterion <cit.> with the following form: V̂_σ( e) = 1/L∑_i = 1^L p̂_σ( e_i) ≈V̂_σ ^Q( e) = 1/L^2∑_i = 1^L ∑_h = 1^H H_hG_σ[ e_i - c_h] , where G_σ( ·) is Gaussian function with the form of G_σ( e ) = 1 / . -√(2π)σexp[ - ( 1 / . -2σ ^2)e^2], parameter σ represents the bandwidth. From another point of view, a quantizer is a clustering algorithm that classifies the set of errors at a certain distance and the distance is the quantization threshold γ. Predictably, the larger the gamma, the fewer the number H of classes the error set is divided into. §.§ Properties Property 1: When γ=0, one can be obtain that V̂_α ,β( e) = V̂_α ,β^Q( e). In the case of γ=0, the code book is C = {e_1,e_2, ⋯ ,e_L}. According to (<ref>), one can obtain V̂_α ,β( e) = V̂_α ,β^Q( e). Property 2: The proposed cost function V̂_α ,β^Q( e) is bounded, which can be expressed specifically as V̂_α ,β^Q( e) ⩽α/ . -2βΓ( 1/α), with equality if and only if e_1 = e_2 = ⋯ = e_L. From (<ref>) and (<ref>), we can obtain V̂_α ,β^Q( e) = 1/L^2∑_i = 1^L ∑_j = 1^L α/2βΓ( 1/α)exp( - | e_i - Q( e_j,γ)|/β ^α) , since G_α ,β( e ) ⩽α/ . -2βΓ( 1/α) with equality if and only if e = 0. Therefore, one can obtain V̂_α ,β^Q( e) ⩽1/L^2∑_i = 1^L ∑_j = 1^L α/2βΓ( 1/α) = α/2βΓ( 1/α). Property 3: It holds that V̂_α ,β^Q( e) = ∑_h = 1^H a_hp̂( c_h), where a_h = H_h/ . - L, and one can obtain ∑_h = 1^H a_h = 1. It can easily be deduced that V̂_α ,β^Q( e) = ∑_h = 1^H H_h/L( 1/L∑_i = 1^L G_α ,β( e_i - c_h)) = ∑_h = 1^H a_hp̂( c_h) . 
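To make the quantization step concrete, the following sketch (not taken from the cited works; the quantizer follows the usual online vector-quantization idea, and all parameter values are assumed) compares the full O(L^2) information potential with its quantized counterpart, which needs only O(LH) kernel evaluations:

```python
import numpy as np
from scipy.special import gamma

def gg_kernel(e, alpha, beta):
    """Generalized Gaussian density G_{alpha,beta}(e)."""
    return alpha / (2.0 * beta * gamma(1.0 / alpha)) * np.exp(-(np.abs(e) / beta) ** alpha)

def quantize(errors, thr):
    """Simple online quantizer: merge an error into the nearest codeword if it lies
    within `thr`, otherwise open a new codeword; returns codebook c_h and counts H_h."""
    codebook, counts = [], []
    for e in errors:
        if codebook:
            j = int(np.argmin(np.abs(np.asarray(codebook) - e)))
            if abs(codebook[j] - e) <= thr:
                counts[j] += 1
                continue
        codebook.append(e)
        counts.append(1)
    return np.asarray(codebook), np.asarray(counts)

rng = np.random.default_rng(0)
e = rng.standard_t(df=3, size=500)       # hypothetical heavy-tailed error sample
alpha, beta, thr = 1.5, 2.0, 0.1         # assumed parameter values
L = len(e)

V_full = np.mean(gg_kernel(e[:, None] - e[None, :], alpha, beta))              # L^2 kernel evaluations
c, H = quantize(e, thr)
V_quant = np.sum(H * gg_kernel(e[:, None] - c[None, :], alpha, beta)) / L**2   # only L*H evaluations

print(f"L = {L}, codebook size H = {len(c)}")
print(f"IP estimate: full = {V_full:.6f}, quantized = {V_quant:.6f}")
```

For a vanishing threshold the two estimates coincide (Property 1 below), while a moderate threshold keeps the codebook much smaller than the sample size at the cost of a small approximation error.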
From property 3, one can obtain that quantized IP is the weighted sum of Parzen's PDF estimator, and this weight is determined by the number H_h in class h. Property 4: The generalized correntropy criterion (GCC) is a special case of the QGMEE criterion. When γ is large enough so that H=1, we can obtain V̂_α ,β^Q( e) = p̂( c_h) from (<ref>). When H=1 and C = { 0 }, for the more special case, (<ref>) can be further written as V̂_α ,β^Q( e) = 1/L∑_i = 1^L G_α ,β( e_i), which is GCC in <cit.>. The GCC measures the local similarity at zero, while the QGMEE criterion measures the average similarity about every c_h. Property 5: When scale parameter β is sufficiently big, we can get V̂_α ,β^Q( e) ≈ α/2βΓ( 1/α) - α/2| β|^α + 1Γ( 1/α)∑_i = 1^L ∑_h = 1^H H_h/L^2| e_i - c_h|^α , where 1 / . - L∑_i = 1^L | e_i - c_h|^α is the α-order moment of error about c_h By using Taylor series, (<ref>) can be rewritten as V̂_α ,β^Q( e) = 1/L^2∑_i = 1^L ∑_h = 1^H H_hG_α ,β( e_i - c_h) = α/2βΓ( 1/α)L^2∑_i = 1^L ∑_h = 1^H ∑_t = 0^∞1/t!H_h( - | e_i - c_h/β|^α)^t , and when β is sufficiently big, we can obtain V̂_α ,β^Q( e) ≈α/2βΓ( 1/α)L^2∑_i = 1^L ∑_h = 1^H H_h[ 1 - 1/| β|^α| e_i - c_h|^α] = α/2βΓ( 1/α) - α/2| β|^α + 1Γ( 1/α)∑_i = 1^L ∑_h = 1^H H_h/L^2| e_i - c_h|^α . Property 6: In the case of regression model f( u) = w^Tu with a vector w of weights that need to be estimated, and w the optimal solution based on the QGMEE criterion is w = N_QGMEE^ - 1M_QGMEE, where M_QGMEE = ∑_i = 1^L ∑_h = 1^H H_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 2( d_i - c_h)u_i and N_QGMEE = ∑_i = 1^L ∑_h = 1^H H_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 2u_iu_i^T. The gradient of the cost function V̂_α ,β^Q( e) with respect to w can be written as ∂V̂_α ,β^Q( e)/∂w = 1/L^2α/β ^α∑_i = 1^L ∑_h = 1^H [ H_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 1 ×sign( e_i - c_h)u_i ] = α/L^2β ^α∑_i = 1^L ∑_h = 1^H [ H_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 2 ×[ ( d_i - c_h) - w_n^Tφ _i]u_i ] = α/L^2β ^α∑_i = 1^L ∑_h = 1^H H_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 2( d_i - c_h)u_i - α/L^2β ^α∑_i = 1^L ∑_h = 1^H H_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 2u_iu_i^Tw . = α/L^2β ^αM_QGMEE - α/L^2β ^αN_QGMEEw. Setting (<ref>) equal to zero, and one can obtain w = N_QGMEE^ - 1M_QGMEE, which proves the property 6. Property 6 is utilized to deal with regression problem in <cit.>, and the QGMEE adaptive filtering is developed. § KERNEL ADAPTIVE FILTERING BASED ON QGMEE The input vector u_n∈𝕌 is considered to be transformed into a hypothesis space 𝕂 by the nonlinear mapping f( ·). The input space 𝕌 is a compact domain of ℝ^M, and the output d_n∈ℝ^1 can be described from d_n = f( u_n) + v_n, where v_n denote the zero-mean noise. RKHS with a Mercer kernel κ( X,Y) will be the learning hypotheses space. The norms used in this study are all l_2-norms. The commonly used Gaussian kernel with bandwidth σ is utilized: κ( X,Y) = G_σ( X - Y) = 1/√(2π)σexp( - 1/2σ ^2X - Y^2). Theoretically, every Mercer kernel will produce a distinct hypothesis feature space <cit.>. As a result, the input data {u_1,u_2, ⋯u_N} will be translated into the feature space as {φ_1,φ_2, ⋯φ_N} (N represents the total number of the input data), rendering it impossible to do a straight calculation. Instead, using the well-known "kernel trick" <cit.>, one can get the inner production from (<ref>): φ _i^Tφ _j = κ( u_i,u_j) = 1/√(2π)σexp( - 1/2σ ^2u_i - u_j^2). 
The filter output can be expressed as w_n^Tφ_n for each time point n, where w_n is a weight vector in the high-dimensional hypothesis space 𝕂, therefore, the output error can be written separately as e_n = d_n - w_n^Tφ_n. §.§ Kernel recursive QGMEE and QMEE algorithm According to the QGMEE criterion, one can obtain the cost function with the following form: J_QGMEE( w_n) = 1/L^2λ ^i + h∑_i = 1^L ∑_h = 1^H H_hG_α ,β( e_i - c_h) - 1/2ϑ _1w_n^2, where 0 < λ⩽ 1 stands for the exponential forgetting factor. The gradient of (<ref>) with respect to w_n is ∂J_QGMEE( w_n)/∂w_n = - ϑ _1w_n + α/L^2β ^α∑_i = 1^L ∑_h = 1^H ( λ ^i + hH_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 2 ×[ ( d_i - c_h) - w_n^Tφ _i]φ _i ) . By applying a formal transformation to equation (<ref>), (<ref>) can be further written as ∂J_QGMEE( w_n)/∂w_n = α/L^2β ^αΦ_LΛ_Ld_L - α/L^2β ^αΦ_LΛ_LΦ_L^Tw_n - ϑ _1w_n, where {Φ_L = [ [ φ _1 φ _2 ⋯ φ _L ]] = [ [ Φ_L - 1 φ _L ]], d_L = [ [ d_1 - c_h d_2 - c_h ⋯ d_h - c_h ]]^T = [ [ d_L - 1 d_h - c_h ]]^T, Λ_L = [ [ Λ_L - 1 0; 0 θ _L ]], [ Λ_L]_ij = {[ ∑_h = 1^H λ ^i + hH_hG_α ,β( e_i - c_h)| e_i - c_h|^α - 2 ,i = j,; 0,i j, ]. θ _L = ∑_h = 1^H λ ^L + hH_hG_α ,β( e_L - c_h)| e_L - c_h|^α - 2 . . To solve for the extreme values, the gradient of (<ref>) is set to zero, and one can get w_n = ( Φ_LΛ_LΦ_L^T + L^2β ^α/αϑ _1I)^ - 1Φ_LΛ_Ld_L. Utilizing the kernel trick, (<ref>) can be rewritten as w_n = Φ_L( Φ_L^TΦ_L + β ^αϑ _2Λ_L^ - 1)^ - 1d_L with ϑ _2 = L^2ϑ _1/ . -α. Given that the input and weight w_n are observed to combine linearly, one can obtain w_n = A_Ld_L, where A_L = Φ_LQ_L and Q_L = ( Φ_L^TΦ_L + β ^αϑ _2Λ_L^ - 1)^ - 1. Then, we can get the expression for Q_L^ - 1 Q_L^ - 1 = Φ_L^TΦ_L + β ^αϑ _2Λ_L^ - 1 = [ [ Φ_L - 1^T; φ _L^T ]][ [ Φ_L - 1 φ _L ]] + β ^αϑ _2[ [ Λ_L - 1^ - 1 0; 0 θ _L^ - 1 ]] = [ [ Q_L - 1^ - 1 h_L; h_L^T φ _L^Tφ _L + β ^αϑ _2θ _L^ - 1 ]], where h_L = Φ_L - 1^Tφ _L. According to the block matrix inversion, one can obtain that Q_L = [ [ Q_L - 1 + z_Lz_L^Tr_L^ - 1 - z_Lr_L^ - 1; - z_L^Tr_L^ - 1 r_L^ - 1 ]], where z_L = Q_L - 1h_L = Q_L - 1Φ_L - 1^Tφ _L and r_L = φ _L^Tφ _L + β ^αϑ _2θ _L^ - 1 - z_L^Th_L. Substituting (<ref>) and (<ref>) into A_L = Φ_LQ_L, and we can get A_L = [ [ Q_L - 1 + z_Lz_L^Tr_L^ - 1 - z_Lr_L^ - 1; - z_L^Tr_L^ - 1 r_L^ - 1 ]]d_L = [ [ Q_L - 1 + z_Lz_L^Tr_L^ - 1 - z_Lr_L^ - 1; - z_L^Tr_L^ - 1 r_L^ - 1 ]][ [ d_L - 1; d_L - c_h ]] = [ [ A_L - 1 - z_Lr_L^ - 1( d_L - c_h); r_L^ - 1( d_L - c_h) ]]. According to the above-detailed derivation, the pseudo-code of the proposed QKGMEE algorithm is summarised in Algorithm <ref>. When γ=0, the proposed QKRGMEE algorithm translates into KRGMEE algorithm <cit.>. It can be observed that the KRGMEE algorithm is a special form of the QKRGMEE algorithm, and the QKRGMEE algorithm has a much smaller computational burden. It is obvious to infer that the QKRGMEE algorithm will translate into a special algorithm with a QMEE cost function, and the derived algorithm is known to us as QKRMEE. The QKRMEE algorithm and the QKRGMEE algorithm share a similar comprehensive derivation procedure. The QKRMEE derivation method is skipped to cut down on repetition, while Algorithm <ref> provides an overview of its pseudo-code. The QKRGMEE algorithm translates into the proposed QKRMEE algorithm for α=2. When α=2 and γ=0, the proposed QKRGMEE algorithm translates into the KRMEE algorithm. 
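Before turning to the analysis, the core of the recursion above can be checked numerically. The sketch below is illustrative only: the inputs, the per-sample weights θ_i, and the regularization constant are hypothetical, and the coefficient update is omitted; it only grows Q_L by the block-inversion step and compares the result with the direct inverse:

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(u, v, sigma=1.0):
    return np.exp(-np.linalg.norm(u - v) ** 2 / (2.0 * sigma ** 2))

# Hypothetical inputs and per-sample weights theta_i; in QKRGMEE the theta_i are
# computed from the quantized error codebook via the expression for theta_L above.
U = rng.normal(size=(6, 2))
theta = rng.uniform(0.5, 2.0, size=6)
reg = 0.1                                  # plays the role of beta^alpha * vartheta_2

K = np.array([[gauss_kernel(a, b) for b in U] for a in U])     # Gram matrix Phi^T Phi

# Recursive growth of Q_L = (Phi_L^T Phi_L + reg * Lambda_L^{-1})^{-1}
# using the block-matrix inversion step of the text.
Q = np.array([[1.0 / (K[0, 0] + reg / theta[0])]])
for n in range(1, len(U)):
    h = K[:n, n]                           # h_L = Phi_{L-1}^T phi_L
    z = Q @ h                              # z_L = Q_{L-1} h_L
    r = K[n, n] + reg / theta[n] - z @ h   # r_L
    Q = np.block([[Q + np.outer(z, z) / r, -z[:, None] / r],
                  [-z[None, :] / r,        np.array([[1.0 / r]])]])

Q_direct = np.linalg.inv(K + reg * np.diag(1.0 / theta))
print("max |Q_recursive - Q_direct| =", np.max(np.abs(Q - Q_direct)))
```

The printed deviation should be at the level of rounding error, confirming that the block update reproduces the weighted, regularized kernel matrix inverse without ever inverting a growing matrix from scratch.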
§ PERFORMANCE ANALYSIS §.§ Mean error behavior By using the QGMEE criterion, the output of the nonlinear system's desired result is d_n = ( w^o)^Tφ_n + v_n, where w^o represents the unknown parameter and v_n denotes the zero-mean measurement noise. The weight w can also, from <cit.>, be expressed as w_n = w_n - 1 + ( φ_n - Φ_n - 1z_n)r_n^ - 1e_n. Suppose that the weight error definition is ε_n = w^o - w_n, where ε_n is a vector ε_n = [ [ ε _1;n ε _2;n ⋯ ε _m;n ]]^T. Substituting (<ref>) into (<ref>), and we can obtain ε_n = w^o - w_n = ε_n - 1 - ( φ_n - Φ_n - 1z_n)r_n^ - 1e_n. According to (<ref>), one can obtain e_n = φ_n^Tw^o + v_n - φ_n^Tw_n - 1 Substituting (<ref>) into (<ref>), and we can get ε_n = ( I - α_nφ_n^T)ε_n - 1 - α_nv_n, where α_n = ( φ_n - Φ_n - 1z_n)r_n^ - 1. Given that v_n's mean value is zero, the expectation of ε_n can be written as E[ ε_n] = ( I - E[ α_nφ_n^T])E[ ε_n - 1]. The eigenvalue decomposition of E[ α_nφ_n^T] is E[ α_nφ_n^T] = KΩ K^T. K denotes a square matrix composed of eigenvectors whose diagonal elements are eigenvalues. When we set ε = K^Tε, (<ref>) can be further written as E[ ε_n] = ( I - Ω)E[ ε_n - 1]. As a result, W's maximum eigenvalue E[ α_nφ_n^T] is less than one, which implies that E[ ε_n] will eventually converge. §.§ Mean square error behavior As the noise v_n is never correlated with the ε_n - 1, the covariance matrix of E[ ε_nε_n^T] = α_nE[ v_nv_n]α_n^T + ( I - α_nφ_n^T)E[ ε_n - 1ε_n - 1^T]( I - α_nφ_n^T)^T. (<ref>) can be abbreviated as T_n = R_nT_n - 1R_n^T + Ξ_n with {T_n = E[ ε_nε_n^T], R_n = ( I - α_nφ_n^T), Ξ_n = α_nE[ v_nv_n]α_n^T. . Given that, α_nφ_n^T and α_n are variables that are independent of time <cit.>, one can summarize {lim_n →∞T_n = T, lim_n →∞R_n = R, lim_n →∞Ξ_n = Ξ. . As a result, when n →∞, (<ref>) can be written as a real discrete-time Lyapunov equation with the following formula: T = RTR^T + Ξ. From the matrix-vector operator: {vec( SUV) = ( V^T⊗S)vec( U), vec( S + V) = vec( S) + vec( V), . where ⊗ represents the Kronecker product and vec( ·) stands for vectorization operation. The closed-form solution of (<ref>) can be written as vec( T) = ( I - R⊗R)^ - 1vec( Ξ). §.§ Computational Complexity In comparison to the KRMEE and KRGMEE algorithms, the computational complexity of the proposed QKRMEE and QKRGMEE algorithms is examined. By comparing the pseudocode of these algorithms, it can be deduced that the formulas involved in these algorithms have the same form except for the calculation of θ _L;S and θ _L. The method in <cit.> for assessing the computational burden of each algorithm is used to make it easier to compare the computational complexity of different algorithms. The difference between the KRMEE and QKRMEE algorithms' computational complexity can therefore be stated as {C_QKRMEE = C_com + C_θ ;QKRMEE, C_KRMEE = C_com + C_θ ;KRMEE, . where C_KRMEE and C_QKRMEE are the computational complexity of one cycle of the KRMEE and QKRMEE algorithms; C_com represents the computational complexity of formulas of the same form in both algorithms; C_θ ;KRMEE and C_θ ;QKRMEE are the computational complexity of ϕ _L in (17) <cit.> and θ _L;S in (<ref>). The difference C_d;MEE in computational complexity between the KRMEE and QKRMEE algorithms can be expressed as C_d;MEE = C_θ ;KRMEE - C_θ ;QKRMEE. 
Similarly, the difference C_d;GMEE in computational complexity between the KRGMEE and QKRGMEE algorithms can be expressed as C_d;GMEE = C_θ ;KRGMEE - C_θ ;QKRGMEE, where C_θ ;KRGMEE and C_θ ;QKRGMEE denote the computational complexities of ψ _L in (30g) of <cit.> and θ _L in (<ref>), respectively. The computational complexities C_θ ;KRMEE, C_θ ;QKRMEE, C_θ ;KRGMEE, and C_θ ;QKRGMEE are shown in Table <ref>. Based on the estimation scheme of the computational complexity in <cit.>, one can obtain C_d;MEE≈ 15L - 14 - 16H, C_d;GMEE≈ 19L - 18 - 20H. The estimate (<ref>), obtained by adapting the strategy in <cit.>, reflects the contribution of the quantization mechanism only approximately and is not completely accurate. Nevertheless, it shows that the quantization approach can successfully lessen the computational burden of the KRMEE and KRGMEE algorithms when L is large and H ≪ L. It is worth noting that reducing the computational burden will, to some extent, degrade the steady-state error performance of the proposed algorithms. How to choose the quantization threshold to trade off the performance of the algorithm against the computational complexity is discussed in detail in Section <ref>. § SIMULATIONS To demonstrate the effectiveness of the QKRGMEE and QKRMEE algorithms, we present various simulations, and the MSE is regarded as the measure of algorithm performance in terms of steady-state error. Several noise models covered in this paper are presented before these simulations are implemented, such as mixed-Gaussian noise, Gaussian noise, and Rayleigh noise. * The mixed-Gaussian model <cit.> takes the following form: v ∼ς𝒩( a_1,μ _1) + ( 1 - ς)𝒩( a_2,μ _2), 0 ⩽ς⩽ 1, where 𝒩( a_1,μ _1) denotes the Gaussian distribution with mean a_1 and variance μ _1, and ς represents the mixture coefficient of the two Gaussian components. The mixed-Gaussian distribution is abbreviated as v ∼ M( ς ,a_1,a_2,μ _1,μ _2). * The probability density function of the Rayleigh distribution is written as r( t ) = ( t/χ ^2)exp( - t^2/( 2χ ^2)). Noise that follows a Rayleigh distribution is denoted as v ∼ R( χ). In this paper, four scenarios are considered, and the noise distributions for these four scenarios are R( 3 ), M( 0.95,0,0,0.01,64), 𝒩( 0,0.01), and 0.2R( 3 ) + 0.8M( 0.8,0,0,0.01,64), respectively. §.§ Mackey–Glass time series prediction This subsection tests the nonlinear learning performance of the QKRMEE and QKRGMEE algorithms on the benchmark Mackey–Glass (MG) chaotic time series. The MG series is generated by the nonlinear delay differential equation ds( t )/dt = 0.2s( t - τ)/( 1 + s^10( t - τ)) - 0.1s( t ). By solving the MG equation, 1000 noise-added training data and 100 test data are produced. In the four aforementioned scenarios, the performance of the QKRGMEE and QKRMEE algorithms is compared with that of the KRLS <cit.>, KRMC <cit.>, KRMEE <cit.>, and KRGMEE <cit.> algorithms. In Fig. <ref>, the parameters of the algorithms and the MSE convergence curves are displayed, and the regularization factors of the KRLS-type adaptive filtering algorithms are all set to 1. It is evident that the proposed QKRMEE and QKRGMEE algorithms perform only marginally worse than the KRMEE and KRGMEE algorithms.
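For readers who wish to set up a comparable benchmark, a minimal generator of the MG series is sketched below; the delay τ = 17, the step size, the initial value, and the embedding dimension are assumed values, not taken from the paper:

```python
import numpy as np

def mackey_glass(n_steps, tau=17.0, dt=0.1, x0=1.2):
    """Euler integration of ds/dt = 0.2 s(t-tau) / (1 + s(t-tau)^10) - 0.1 s(t)."""
    delay = int(round(tau / dt))
    s = np.full(n_steps + delay, x0)
    for t in range(delay, n_steps + delay - 1):
        s[t + 1] = s[t] + dt * (0.2 * s[t - delay] / (1.0 + s[t - delay] ** 10) - 0.1 * s[t])
    return s[delay:]

series = mackey_glass(1100)
d = 7                                       # embedding dimension (assumed)
X = np.array([series[i - d:i] for i in range(d, len(series))])   # inputs u_n
y = series[d:]                              # desired outputs d_n
print(X.shape, y.shape)                     # e.g. (1093, 7) (1093,)
```

Additive noise drawn from the models listed above can then be superimposed on the desired outputs to reproduce the four test scenarios.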
§.§ The relationship between parameters and performance In this section, we investigated how the shape parameter α, scale parameter β, length of the Parzen window L, and quantization threshold γ of the QKRGMEE algorithm affected performance on the performance in terms of MSE. Since the QKRMEE algorithm is a special form of the QKRGMEE algorithm, one focuses on the influence of parameters on the performance of the QKRGMEE algorithm. The discussion of how the parameter settings affect the functionality of the QKRGMEE algorithm continues to use the MG chaotic time series. The results reached can also serve as a guide for choosing the QKRGMEE algorithm's parameters. First, the values of these parameters are shown in Fig. <ref> as we investigate the impact of parameter L on the functionality of the QKRGMEE algorithm. The parameter L is set to L = 5,15,20,40,80 in this simulation. The simulation results are shown in Fig. <ref> and Table <ref>, and the distribution of the additive noise is the same as it was in the prior simulation. Fig. <ref> shows the convergence curves of the QKRGMEE algorithm with different L in the first scenario. Table <ref> presents the steady-state MSE with different L and scenarios. Simulations show that the proposed QKRGMEE algorithms' steady-state error lowers as L increases with the four noise categories listed above. In addition, the improvement in terms of performance is not significant when L is greater than 50, thus, it is possible to balance the performance of the algorithm with the amount of computation when L is less than 50. Second, the influence of the quantization threshold on the performance of the algorithm is shown in Fig. <ref> and Table <ref>. Fig. <ref> presents the convergence curve of the MSE with different γ with the presence of Rayleigh noise, and L is set as L=50. Table <ref> shows the MSE, the running time of each iteration, and the number of elements H in the quantized error set with the different γ and scenarios. These KRLS algorithms are measured using MATLAB 2020a, which works on an i5-8400 and a 2.80GHz CPU. Moreover, the KRLS and KRGMEE algorithms are used as benchmarks. From these simulation results, it can be inferred that both the running time and the number H decrease as the quantization threshold increases, while the performance of the algorithm also decreases to some extent; moreover, one can obtain H ≪ L. The suggested range of quantization thresholds is 0.04 ⩽γ⩽ 0.15, which strikes a balance between the QKRGMEE algorithm's efficiency and computing complexity. Final, it is also addressed how the parameters α and β affect the QKRGMEE algorithm's performance. The simulation results are presented in Fig. <ref>, Fig. <ref>, Fig. <ref>, Table <ref>, and Table <ref>. Fig. <ref> and Fig. <ref> show, respectively, the convergence curves of the steady-state MSE of the method with varying α and β in the presence of Rayleigh noise. The settings of the parameters are also shown in the corresponding figures. The influence surfaces of α and β on the steady-state MSE under different noise are presented in Fig. <ref> and Fig. <ref>. Table <ref> and Table <ref> show the pattern of the algorithm's performance with different α and β in different scenarios. From the simulation results, one can obtain that the proposed algorithm works well for values of alpha or beta in the range 0.1 ⩽α⩽ 1.5 or 1 ⩽β⩽ 4 under the given scenarios. §.§ EEG data processing In this part, we use our proposed QKRMEE and QKRGMEE algorithms to handle the real-world EEG data. 
The EEG data, obtained from <cit.>, were recorded with 64 Ag/AgCl electrodes placed according to the extended 10-20 system. The brain data are captured at a sampling rate of 500 Hz. The settings are presented in Fig. <ref>, and the results are displayed in Fig. <ref>. Here, we use a segment of the FP1 channel data as the input. Fig. <ref> displays the convergence curves of the QKRGMEE algorithm and its competitors, and Fig. <ref> presents the surface of the MSE for different α and β. The mean value of H is 7.25 when L=50 and r=0.02, which shows that the quantizer can significantly reduce the computational burden of the algorithm without any significant degradation in its performance. It can be concluded that the performance of the QKRGMEE algorithm is much higher than that of the KRMEE algorithm and is comparable to that of the KRGMEE algorithm, even though the QKRGMEE algorithm has a lower computational complexity. § CONCLUSION In this paper, we further refined the properties of the QGMEE criterion. On this basis, the QGMEE criterion was combined with the KRLS algorithm, and two new KRLS-type algorithms were derived, called QKRMEE and QKRGMEE, respectively. The QKRMEE algorithm is a special case of the QKRGMEE algorithm in which α=2. Moreover, the mean error behavior, mean square error behavior, and computational complexity of the proposed algorithms were studied. In addition, simulation and real experimental data were utilized to verify the feasibility of the proposed algorithms. § ACKNOWLEDGEMENTS This study was funded by the National Natural Science Foundation of China under Grant 51975107, Sichuan Science and Technology Major Project No. 2022ZDZX0039 and No. 2019ZDZX0020, and Sichuan Science and Technology Program No. 2022YFG0343. Jiacheng He received the B.S. degree in mechanical engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2020. He is currently pursuing a Ph.D. degree in the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China. His current research interests include information-theoretic learning, signal processing, and adaptive filtering. Gang Wang received the B.E. degree in Communication Engineering and the Ph.D. degree in Biomedical Engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 1999 and 2008, respectively. In 2009, he joined the School of Information and Communication Engineering, University of Electronic Science and Technology of China, where he is currently an Associate Professor. His current research interests include signal processing and intelligent systems. Kun Zhang received the B.S. degree in electronic and information engineering from Hainan University, Haikou, China, in 2018. He is currently working toward the Ph.D. degree in mechanical engineering at the University of Electronic Science and Technology of China, Chengdu, China. His research interests include intelligent manufacturing systems, robotics, and their applications. Shan Zhong received the B.E. degree in electrical engineering and automation from the University of Technology, Chengdu, China, in 2020. He is currently pursuing the M.S. degree in communication engineering with the School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China.
His current research interests include signal processing and target tracking. Bei Peng received the B.S. degree in mechanical engineering from Beihang University, Beijing, China, in 1999, and the M.S. and Ph.D. degrees in mechanical engineering from Northwestern University, Evanston, IL, USA, in 2003 and 2008, respectively. He is currently a Full Professor of Mechanical Engineering with the University of Electronic Science and Technology of China, Chengdu, China. He holds 30 authorized patents. He has served as a PI or Co-PI for more than ten research projects, including projects supported by the National Natural Science Foundation of China. His research interests mainly include intelligent manufacturing systems, robotics, and their applications. Min Li obtained a doctoral degree in Mechanical Engineering from Chongqing University, Chongqing, China, in 2012. He is currently an assistant professor in the School of Mechanical Engineering at Southwest Jiaotong University, China. In recent years, he has presided over one major research and development project of the Sichuan Provincial Department of Science and Technology, one Chunhui Program project of the Ministry of Education, and four horizontal projects for enterprises. He received a first prize of the Ministry of Education Technology Invention Award.
http://arxiv.org/abs/2307.01136v1
20230703161558
Living With a Red Dwarf: The Rotation-Age Relationship of M Dwarfs
[ "Scott G. Engle", "Edward F. Guinan" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.EP" ]
0000-0001-9296-3477]Scott G. Engle 0000-0002-4263-2650]Edward F. Guinan Villanova University Dept. of Astrophysics and Planetary Science 800 E. Lancaster Ave Villanova, PA 19085, USA Age is a fundamental stellar property, yet for many stars it is difficult to reliably determine. For M dwarfs it has been notoriously so. Due to their lower masses, core hydrogen fusion proceeds at a much slower rate in M dwarfs than it does in more massive stars like the Sun. As a consequence, more customary age determination methods (e.g. isochrones and asteroseismology) are unreliable for M dwarfs. As these methods are unavailable, many have searched for reliable alternatives. M dwarfs comprise the overwhelming majority of the nearby stellar inventory, which makes the determination of their fundamental parameters even more important. Further, an ever-increasing number of exoplanets are being found to orbit M dwarfs and recent studies have suggested they may relatively higher number of low-mass planets than other spectral types. Determining the ages of M dwarfs then allows us to better study any hosted exoplanets, as well. Fortunately, M dwarfs possess magnetic activity and stellar winds like other cool dwarf stars. This causes them to undergo the spindown effect (rotate with longer periods) as they age. For this reason, stellar rotation rate has been considered a potentially powerful age determination parameter for over 50 years. Calibrating reliable age-rotation relationships for M dwarfs has been a lengthy process, but here we present the age-rotation relationships for ∼M0–6.5 dwarfs, determined as part of the Living with a Red Dwarf program. These relationships should prove invaluable for a wide range of stellar astrophysics and exoplanetary science applications. § INTRODUCTION & BACKGROUND: STUDYING M DWARFS Main sequence (dwarf) M stars (dM stars; red dwarfs; referred to as M dwarfs hereafter) represent the cool, low mass, low luminosity end of the main sequence, and comprise ∼75% of all stars in the solar neighborhood <cit.>. This study specifically focuses on M0 V – ∼M6.5 V stars, with properties ranging from: Mass ≈ 0.6 – 0.1 M_⊙; Radius ≈ 0.6 – 0.1 R_⊙; Luminosity ≈ 0.06 – 0.001 L_⊙ and temperatures T_ eff = 3900 – 2850 K[<https://www.pas.rochester.edu/ emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt>]. M dwarfs have received substantial attention during the 2000’s, prompted in part by the discovery that these numerous stars host a relatively large number of terrestrial-size planets <cit.> when compared to stars of higher mass. Aside from the large number of nearby M dwarfs available for study, they also make very attractive targets for terrestrial planet searches and research programs as such planets are more readily detected through radial velocity motions and planetary transits due to the low masses and small radii of the M dwarf host stars. Estimates of the frequency of potentially habitable planets (PHP) hosted by M dwarfs have been made primarily from Kepler Mission data, but also from numerous radial velocity studies. Conservative estimates place the planetary frequency around 15% <cit.> and studies including expanded circumstellar Habitable Zone (HZ) estimates indicate higher frequencies of ∼30–40% <cit.>. If a slightly conservative `middle ground' of 25% is adopted, it implies that within 10 pc (∼33 ly) of the Sun (a volume of space containing ∼240 M dwarfs), there should be ∼60 potentially habitable Earth-size planets. 
Extrapolating to include the entire Milky Way raises the possibility that billions of Earth-size planets are orbiting within the habitable zones of M dwarfs. M dwarfs are equally fascinating targets for stellar astrophysics. Relative to their sizes, they display enhanced magnetic dynamo activity due to the extent of their interior convective zones. As a result, their coronal and chromospheric emissions are also relatively strong, compared to their bolometric luminosities. They have comparatively slow core nuclear reaction rates, however, which makes them rather `fuel efficient' and results in their long main-sequence lifetimes. More massive M dwarfs can live on the main sequence for over 100 Gyr while those of lower mass (M < 0.2 M_⊙) can live as long as ∼1 trillion (∼10^12) years <cit.>. Due to this, no M dwarfs have yet to evolve off the main sequence. Another consequence of their long lifetimes, however, is that once M dwarfs reach the core hydrogen-fusing main-sequence their basic physical properties (L, T_ eff, R) remain essentially constant over cosmological time scales (i.e., ∼14 Gyr). Their large numbers, longevities, and near-constant main sequence luminosities make M dwarfs very compelling targets for programs searching for life in the universe since, unlike our Sun, the HZs and thus exoplanet bolometric irradiances (and planetary instellations) remain stable for tens of Gyrs or longer. However, the stars' very slow nuclear evolution makes determining accurate stellar ages extremely challenging <cit.>. Fortunately, it has been known for 50 years <cit.> that cool dwarfs undergo a `spindown effect' whereby their rotation periods lengthen as they age. Since that time, numerous studies have shown the potential that stellar rotation holds as an age determinant – the method known as “gyrochronology” <cit.>. Late-F, G, K, and M dwarfs have stellar winds and magnetic fields which act in tandem to propagate the spin down effect. The winds of these stars are magnetically-threaded, continually carrying small amounts of each star's mass out into space while it is still (over a certain distance) tethered to the star itself by the magnetic field. The mass eventually escapes the magnetic field entirely, but its magnetically-threaded tenure has already caused a slowing of the star's rotation due to conservation of angular momentum <cit.>. This spindown effect allows magnetic activity levels (observed through such proxies as X-ray and UV [X–UV] emissions, and several emission features known to exist within optical spectra) to serve as additional age determinants for cool dwarfs. Activity-age relationships have been also been constructed for M dwarfs, and they will be detailed in a follow-up paper. The largest difficulty resided in building a representative sample of M dwarfs with a wide range of previously known ages and then determining their rotation periods. In this paper we present these `benchmark' objects and the rotation-age relationships of M dwarfs determined as part of the Living with a Red Dwarf (LivRed) Program. § DATING M DWARFS: DETERMINING AGES FOR (MOSTLY) AGELESS STARS Age, along with mass and composition, is one of three key factors governing a star's current state <cit.>, yet it is also one of the most difficult stellar parameters to accurately measure. As mentioned, this is particularly true with M dwarfs, for which other commonly applied methods (e.g., isochronal, asteroseismic) for aging a star are unreliable <cit.>. 
Observables, such as rotation period and X-UV activity level, are known to be age-dependent and are often related to each other, but relating either quantity to stellar age first requires a set of M dwarfs with known ages – a benchmark sample. With no currently available methods for directly determining the ages of single, isolated M dwarfs, the sample of benchmark M dwarfs has instead been built using age by association. Each benchmark either has a stellar companion or belongs to a larger group or population of stars within the galaxy. For each pairing or grouping of stars, it is the age of the companion star or the group that can also be applied to the M dwarf since they are assumed to have formed at the same time. The age by association method in this study can be divided into three categories. For young dwarfs with ages below ∼2 Gyr there are several well-studied `stellar groups' (referred to as either moving groups, clusters, or associations) available. The ages are very reliable, but sadly do not cover nearly the range that we need. There are a limited number of additional clusters with greater ages, but other issues exist. For example, in the clusters NGC 752, and Ruprecht 147 <cit.> rotations periods have only been measured for earlier M dwarfs. A small number of HR 1614 moving group members (age ∼2.0 Gyr – [60]) with rotation periods were once used, but the coherence of the moving group itself has recently been called into question <cit.> which prompted their removal as benchmarks. Traditionally, the distances of highly prized targets such as M67 and NGC 188 (ages 4 Gyr) prevented sufficient time-series photometry any their faint M dwarf members. However, <cit.> have recently measured rotation rates within this cluster for stars as late as ∼M3, showing some of the exciting recent progress in cluster gyrochronology measures. Ages can also be assigned to stars that are members of specific galactic populations. Additional benchmarks were selected which belong to either the Thick Disk or Halo populations (ages of ∼8–11 and ∼10–12.5 Gyr) of the Milky Way, based primarily on the star's UVW galactic space motions <cit.>, with further support of membership from metallicity values and velocity dispersions <cit.>. The advanced ages of these populations make them important benchmarks, but more direct age estimates for individual M dwarfs, as opposed to statistically-supported, kinematically-inferred ages, would usually be preferred. The final and very welcome source of benchmarks is M dwarfs that belong to common proper motion (CPM) pairs/systems with an age-determinable companion. If the companion is a more massive (F–G dwarf) star, then a reliable age can be determined by other, more common (e.g., isochronal and/or asteroseismic) methods and applied to the (assumed to be coeval) M dwarf. Systems with white dwarf (WD) companions have become increasingly useful due to advances in determining the WD progenitor star properties <cit.> that have resulted in increasingly reliable ages. It is always important to note that the separation of the M dwarf from its companion is assumed to have prevented past interactions, allowing the M dwarf to evolve as if it were a single, isolated star. Though, for specific pairs (particularly those with small separations is small), the possibility of past interactions may exist (see ). 
However, a particular benefit these systems have over the previously mentioned CPM pairs is that the WDs do not outshine their M dwarf companions, which facilitates CCD photometry of the M dwarfs to search for rotation periods. These systems provided several M dwarf targets with ages older than 2 Gyr: an age-range that was long-awaiting additional targets. Determining rotation periods for these older M dwarfs became a primary focus of the program. For this study, multiple (if available) measures of each WD companion's effective temperature (T_ eff) and surface gravity (log g) were gathered from the recent literature, and mean values and uncertainties were determined via χ^2 analysis. With these values, updated ages and uncertainties were calculated using both the package, written by Dr. Sihao Cheng, and the package, written by Dr. Rocio Kiman <cit.>. Both incorporate the latest WD cooling models available <cit.> and the initial-final mass relationship (IFMR) of <cit.>. We note that this is not the only IFMR choice available within the package, but it is the one we selected for consistency with . § STARING AT M DWARFS: DETERMINING ROTATION PERIODS The surface features (e.g. starspots) of cool, main sequence stars will be brought in and out of view as the stars rotate, if the orientation of the star (inclination of the star's rotation axis relative to our line sight) and the star spot surface distribution are favorable. Repeatedly measuring stellar brightness via photometry can determine the rotation periods by revealing cyclical changes in brightness over time. This was the preferred method for determining benchmark rotation periods, as it is precise and works for very long rotation periods where spectroscopic measures of rotation velocity become ambiguous. Measuring rotation via photometry is a very straightforward process on paper, but in practice substantial difficulties can arise when dealing with M dwarfs. As the Living with a Red Dwarf program was designed to study the crucial missing age-range of >3 Gyr, it was unknown at the outset, but this would mean measuring rotation periods anywhere from ∼30 days to as long as ∼150–170 days. Such extended rotation periods require rather lengthy observing campaigns. As light amplitudes can be below ∼0.015 mag, the photometry requires a sufficiently high precision as well. Further, successfully detecting a rotation signal depends on the star maintaining a `favorable' (non-uniform) distribution of starspots, possibly for several years. Fortunately, many of the M dwarfs observed in this program have displayed a persistence of surface features – in some cases for several years. Though most of our targets are significantly older, our results thus far align with those of <cit.>, which studied 4 young, rapidly rotating M dwarfs. Apart from cluster members, whose rotation periods were obtained from the literature (see Tables <ref> and <ref>), the vast majority of benchmark rotation periods were determined through dedicated CCD photometry of the targets carried out with the 1.3 meter Robotically Controlled Telescope <cit.> at Kitt Peak National Observatory in Arizona. In limited cases, data (or additional data) were obtained using other telescopes (e.g., the 0.8 meter Automated Photoelectric Telescope (APT) – see ) or publicly available surveys, either due to target visibility or to help confirm the rotation period. 
Except for the faintest sources, photometry was carried out in the V-band, with individual measurement uncertainties of typically ∼0.004–0.01 mag depending on the brightness of the star. Data would be removed prior to analysis for reasons ranging from legitimate hardware malfunctions, to poor sky quality conditions, even down to a large moth having taken a poorly timed stroll through the telescope's light path. For RCT / APT targets, 3 – 5 measures were obtained per night, from which nightly means and uncertainties were determined. The yearly and multi-year data sets were searched for periodic variations with the Generalized Lomb-Scargle and CLEANest algorithms, (as implemented within <cit.> and (v3) <cit.>) an example of which is shown in Fig. <ref>. All rotation signals have false alarm probabilities below 1%. In limited cases, additional or alternative sources of photometry were used to determine rotation periods. Proxima Cen and Kapteyn's Star have been a part of the program for several years, but are southern hemisphere targets and thus inaccessible to the RCT. Observing time on the Skynet Robotic Telescope Network was purchased, and CCD photometry was carried out using the network's PROMPT telescopes at the Cerro Tololo Inter-American Observatory in Chile and the Meckering Observatory in Australia. Images were automatically reduced before downloading, and nightly means were determined and analyzed in similar fashion to RCT / APT data. The remaining cases consist of targets that were added to the program only recently. Due to the installation and testing of a new and upgraded camera system, and the Contreras wildfire, the RCT experienced an observing hiatus. With a good combination of photometric depth, precision, and timeline, Zwicky Transient Facility data were used as an alternative. Individual measures were downloaded from the NASA/IPAC Data Archive, excluding any measures the archive had flagged. The resulting data were then sigma-clipped (also within ) before nightly means were constructed. Rotational light curves of the benchmark M dwarfs are shown in Fig. <ref> to give an idea of the amplitudes of variability observed and of the data quality. M dwarf rotation amplitudes can range from ∼0.01 mag or less in the “toughest” cases to as much as 0.06 mag for ideal cases such as Proxima Cen. § 4. RESULTS & DISCUSSION – A TALE OF TWO RELATIONSHIPS The primary focus of the Living with a Red Dwarf (LivRed) program has been to characterize the evolution of M dwarf rotation rates over their lifetimes, with the end goal of providing a reliable method for calculating the age of an M dwarf, so long as its rotation period has been determined. When comparing related, age-associated quantities (e.g., activity and rotation) the data is commonly linearized by taking the logarithms of both quantities. In our analysis of M dwarf age vs rotation, we found that both subsets of M dwarfs (but particularly the mid-late subset) showed deviations from linearity in log-log space, and a more straightforward analysis of their rotations over time could be carried out in semi-log space (see Fig <ref>) while clearly showing the inflection points on the evolutionary tracks of both early and mid-late M dwarfs that will be discussed later in this section. As mentioned previously, constructing these relationships proved a rather complicated task, but not simply due to the difficulty in building a substantial set of benchmark targets and the observational burden of measuring their rotation periods. 
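As a brief aside on the period-search step described in the previous section, the generalized Lomb-Scargle analysis of the nightly means can be reproduced with standard tools. The snippet below is only an illustration, not the pipeline actually used: it assumes astropy's LombScargle implementation as a stand-in for the algorithms cited above, and a hypothetical three-column file of nightly-mean times, V magnitudes, and uncertainties.

import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical nightly means: time (days), V magnitude, and per-night uncertainty.
t_days, v_mag, v_err = np.loadtxt("nightly_means.txt", unpack=True)

# Generalized (floating-mean) Lomb-Scargle periodogram over periods of ~2-250 days,
# bracketing the ~30-170 day rotation periods discussed above.
ls = LombScargle(t_days, v_mag, v_err)
frequency, power = ls.autopower(minimum_frequency=1.0 / 250.0,
                                maximum_frequency=1.0 / 2.0,
                                samples_per_peak=10)

best = power.argmax()
p_rot = 1.0 / frequency[best]
fap = ls.false_alarm_probability(power[best])  # keep only detections with FAP below ~1%
print(f"Candidate rotation period: {p_rot:.1f} d (FAP = {fap:.2e})")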
When it comes to spindown, it appears there is no way to broadly classify all M dwarfs; they represent too wide a range of parameters. lllll The `Early' M Dwarf (M0–2) Benchmarks Name Sp. Type Age (Gyr) P_rot (days) Age determined via Pleiades M0-2V 0.125 [+0.05, -0.05] 2.05 [+2.96, -1.64]^1 Cluster NGC 2516 M2.5-6V 0.15 [+0.02, -0.02] 1.80 [+2.93, -1.50]^1 Cluster M34 M2.5-6V 0.22 [+0.02, -0.02] 6.85 [+6.51, -2.83]^1 Cluster NGC 3532 M0-2V 0.3 [+0.05, -0.05] 10.96 [+6.09, -9.26]^2 Cluster M37 M0-2V 0.52 [+0.06, -0.06] 13.15 [+2.71, -9.03]^1 Cluster Praesepe / Hyades M0-2V 0.73 [+0.12, -0.12] 16.06 [+2.48, -2.91]^3 Cluster NGC 6811 M0-2V 1.0 [+0.2, -0.2] 12.6 [+1.74, -1.17]^4 Cluster NGC 752 M0-2V 1.46 [+0.18, -0.18] 16.12 [+8.87, -13.83]^5 Cluster LP 856-54 M1-1.5V 2.52 [+1.58, -0.62] 23.27 [+0.04, -0.04] WD comp (LP 856-53) Ruprecht 147 M0-2V 2.6 [+0.4, -0.4] 22.4 [+2.9, -2.9]^6,7 Cluster G 59-39 M0V 3.20 [+1.95, -0.93] 33.82 [+0.07, -0.07] WD comp (EGGR 92) GJ 2131 A M1V 3.47 [+1.69, -0.33] 34.82 [+0.1, -0.1] WD comp (GJ 2131 B) LP 775-52 M0-1V 3.62 [+1.84, -0.34] 39.05 [+0.14, -0.14] WD comp (LP 775-53) G 111-72 M1.5-2V 3.64 [+1.35, -0.68] 38.96 [+0.13, -0.13] WD comp (G 111-71) HIP 43232 B M1.5V 3.95 [+0.35, -0.35]^8 41.3 [+4.1, -4.1]^8 MS comp (HIP 43232 A) LP 587-54 M1.5-2V 6.34 [+1.01, -0.94] 54.3 [+0.5, -0.5] WD comp (LP 587-53) GJ 366 M1.5V 9.5 [+1.5, -1.5] 87.1 [+1.4, -1.4] Thick Disk/Halo Population GJ 328 M0V 9.5 [+1.5, -1.5] 80.5 [+1.2, -1.2] Thick Disk/Halo Population GJ 821 M1V 9.5 [+1.5, -1.5] 86.2 [+0.6, -0.6] Thick Disk/Halo Population LP 552-48 M0V 9.88 [+1.77, -1.07] 92.9 [+1.3, -1.3] WD comp (LP 552-49) LHS 343 sdK7 11.5 [+1.0, -1.5] 89.2 [+1.7, -1.7] Halo Population LHS 173 esdK7^9 11.5 [+1.0, -1.5] 85.3 [+3.4, -3.4] Halo Population LHS 174 sdM0^9 11.5 [+1.0, -1.5] 90.4 [+1.5, -1.5 ] Halo Population ^1<cit.> ^2<cit.> ^3<cit.> ^4<cit.> ^5<cit.> ^6,7<cit.> ^8Sawczynec, E. (2021), Thesis available at <https://www.phys.hawaii.edu/wp-content/uploads/2021/06/ESawczynec_Thesis-1.pdf> ^9<cit.> lllll The `Mid' M Dwarf (M2.5–6.5) Benchmarks Name Sp. 
Type Age (Gyr) P_rot (days) Age determined via Pleiades M2.5–∼6.5V^1 0.125 [+0.05, -0.05] 0.59 [+0.65, -0.28]^2 Cluster NGC 2516 M2.5-6V 0.15 [+0.02, -0.02] 0.68 [+0.77, -0.30]^2 Cluster M34 M2.5-6V 0.22 [+0.02, -0.02] 3.03 [+3.58, -2.10]^2 Cluster M37 M2.5-6V 0.52 [+0.06, -0.06] 7.66 [+5.61, -5.61]^2 Cluster Praesepe / Hyades M2.5-∼6.5V^1 0.73 [+0.12, -0.12] 23.8 [+3.3, -3.3]^3 Cluster NGC 752 M2.5-6V 1.46 [+0.18, -0.18] 14.4 [+9.4, -4.7]^4 Cluster NSV 11919 M2.5-3V 1.97 [+0.98, -0.35] 21.07 [+0.05, -0.05] WD comp (NSV 11920) Gaia DR3 7552^5a M3-3.5V 1.98 [+1.01, -0.39] 20.6 [+1.1, -1.1] WD comp (Gaia DR3 4784^5b) G 148-6 M3-3.5V 2.20 [+1.35, -0.57] 22.7 [+0.05, -0.05] WD comp (G 148-7) Ruprecht 147 M2.5-4V 2.6 [+0.4, -0.4] 25.3 [+6.9, -2.9] Cluster LP 783-2 M6.5V 3.12 [+1.73, -0.56] 25.45 [+0.06, -0.06] WD comp (LP 783-3) G 130-6 M3V 3.60 [+1.84, -1.63] 49.5 [+0.1, -0.1] WD comp (G 130-5) LP 498-25 M2.5V 3.80 [+0.70, -1.58] 49.0 [+0.2, -0.2] WD comp (LP 498-26) LHS 353 M4V 4.42 [+2.13, -1.52] 61.2 [+2.1, -2.1] WD comp (GJ 515) 40 Eri C M4.5V 4.89 [+4.81, -2.71] 81.0 [+0.7, -0.7] WD comp (40 Eri B) G 87-28 M4V 4.91 [+3.72, -2.65] 91.1 [+0.7, -0.7] WD comp (G 87-29) G 201-40 M3-3.5V 5.16 [+3.77, -2.86] 77.6 [+0.9, -0.9] WD comp (G 201-39) Proxima Cen M5.5V 5.3 [+0.7, -0.7] 86.4 [+2.5, -2.5] α Cen system LSPM J1604+3909W M4V 6.9 [+0.9, -0.9] 127.9 [+2.8, -2.8] MS comp (HD 144579) LHS 229 M4V 8.71 [+0.56, -0.46] 152.3[+1.7, -1.7] WD comp (LHS 230) GJ 299 M4.5V 9.5 [+1.5, -1.5] 156.9 [+1.2, -1.2] Thick Disk/Halo Population GJ 213 M4V 9.5 [+1.5, -1.5] 158.6 [+2.0, -2.0] Thick Disk/Halo Population GJ 157.1 M4V 9.5 [+1.5, -1.5] 142.7 [+3.0, -3.0] Thick Disk/Halo Population Barnard’s Star (GJ 699) M4V 9.5 [+1.5, -1.5] 149.5 [+0.6, -0.6] Thick Disk/Halo Population GJ 273 M3.5V 9.5 [+1.5, -1.5] 160.8 [+2.5, -2.5] Thick Disk/Halo Population GJ 581 M3V 9.5 [+1.5, -1.5] 148.1 [+0.9, -0.9] Thick Disk/Halo Population LHS 1802 M5V 9.97 [+3.53, -5.12] 150.4 [+2.4, -2.4] WD comp (LHS 1801) LHS 182 usdM0 11.5 [+1.0, -1.5] 161.4 [+4.3, -4.3] Halo Population GJ 1062 sdM2.5 11.5 [+1.0, -1.5] 159.5 [+2.1, -3.4] Halo Population LHS 3382 esdM2.5 11.5 [+1.0, -1.5] 169.2 [+9.3, -3.7] Halo Population LHS 216 esdM3^6 11.5 [+1.0, -1.5] 174.4 [+2.8, -2.8] Halo Population Kapteyn's Star (GJ 191) sdM1.5^6 11.5 [+1.0, -1.5] 153.2 [+3.7, -3.7 ] Halo Population ^1Depending on the stellar parameter used, rotation periods in these clusters were measured for objects as late as either M6 or M6.5. 
^2<cit.> ^3<cit.> ^4<cit.> ^5aFull Name: Gaia DR3 5172481276951287552 ^5bFull Name: Gaia DR3 5172481203936294784 ^6Classified using <cit.> and <cit.> llllll Parameters Used to Derive Ages for the White Dwarf Companions WD Name WD Type Model log g T_eff Source(s) LP 856-53 DA5 H Thick 8.12 [+0.08, -0.08] 9903 [+105, -105] ^6, ^17, ^18, ^19 GJ 2131 B DA3.9 H Thick 7.98 [+0.03, -0.03] 12615 [+420, -420] ^6, ^9, ^17 G 111-71 DA6.5 H Thick 8.01 [+0.08, -0.08] 7560 [+23, -24] ^4, ^6, ^9, ^18 LP 775-53 DA H Thick 8.085 [+0.125, -0.125] 6587 [+100, -100] ^6, ^18 EGGR 92 DA4 H Thick 8.01 [+0.04, -0.04] 10590 [+65, -65] ^3, ^6, ^18 LP 587-53 DA8.6 H Thick 7.96 [+0.03, -0.02] 5782 [+73, -82] ^3, ^4, ^6 LP 552-49 DC H Thin 8.00 [+0.25, -0.25] 4460 [+106, -106] ^4, ^13 NSV 11920 DBZ5 H Thin 8.105 [+0.02, -0.02] 11070 [+96, -96] ^2, ^7, ^11,^16, ^28, ^29 Gaia DR3 4784 non-DA H Thin 8.15 [-0.05, -0.05] 9960 [+112, -112] ^28, ^29 G 148-7 DA3.1 H Thick 8.01 [-0.02, -0.02] 15840 [+329, -329] ^3, ^6, ^9, ^13, ^18, ^26, ^27 LP 783-3 DZ6.5 H Thin 8.10 [+0.03, -0.03] 7924 [+97, -97] ^6, ^11, ^16, ^21 G 130-5 DA4 H Thick 7.99 [+0.04, -0.04] 12838 [+180, -180] ^3, ^6, ^9, ^13, ^18 LP 498-26 DB3 H Thin 8.02 [+0.05, -0.05] 15405 [+232, -232] ^5, ^6, ^7 GJ 515 DA4 H Thick 7.94 [+0.04, -0.04] 14405 [+200, -200] ^9, ^17, ^19, ^21, ^28 LP 672-1 DA3.1 H Thick 7.94 [+0.03, -0.03] 15742 [+423, -423] ^6, ^17, ^18, ^19, ^21 LHS 27 DC7.1 H Thin 8.14 [+0.03, -0.03] 7060 [+205, -205] ^1, ^13, ^16, ^25 40 Eri B DA2.9 H Thin 7.94 [+0.04, -0.04] 16979 [+424, -424] ^8, ^9, ^20 G 201-39 DA5.6 H Thick 7.97 [+0.06, -0.05] 9007 [+69, -70] ^3, ^6, ^10, ^17, ^18 G 87-29 DQ8 H Thin 8.00 [+0.12, -0.12] 6674 [+360, -360] ^4, ^6, ^13, ^24, ^25 LHS 230 DA+DA H Thick 8.10 [+0.03, -0.27] 4926 [+255, -255] ^12 LHS 1801 DA H Thick 7.92 [+0.07, -0.06] 5145 [+88, -89] ^4, ^6, ^13 ^1<cit.> ^2<cit.> ^3<cit.> ^4<cit.> ^5<cit.> ^6<cit.> ^7<cit.> ^8<cit.> ^9<cit.> ^10<cit.> ^11<cit.> ^12<cit.> ^13<cit.> ^14<cit.> ^15<cit.> ^16<cit.> ^17<cit.> ^18<cit.> ^19<cit.> ^20<cit.> ^21<cit.> ^22<cit.> ^23<cit.> ^24<cit.> ^25<cit.> ^26<cit.> ^27<cit.> ^28<cit.> ^29<cit.> llllll Rotation-based Age Determinations for Exoplanet-Hosting M Dwarfs Star Name Age (Gyr) Age err^1 Relationship Used Note^2 USco1621 A 0.17 0.01 mid-late HATS-74 A 0.18 0.01 early AU Mic 0.18 0.01 early COCONUTS-2 A 0.19 0.01 mid-late USco1556 A 0.24 0.01 mid-late K2-284 0.32 0.02 early TOI-620 0.33 0.02 early K2-104 0.34 0.02 early K2-240 0.42 0.02 early Kepler-1512 0.47 0.01 mid-late GJ 463 0.67 0.03 early Kepler-1410 0.68 0.05 early TOI-540 0.72 0.02 M4+ EPIC 211822797 0.73 0.07 early TRAPPIST-1 0.75 0.02 M4+ TOI-1227 0.76 0.02 M4+ 2MASS J04372171+2651014 0.77 0.02 M4+ K2-25 0.77 0.02 M4+ GJ 9066 0.77 0.02 M4+ HATS-76 0.79 0.05 early Kepler-45 0.87 0.05 early K2-415 0.89 0.03 M4+ GJ 338 B 0.97 0.06 early 1 Kepler-1229 1.13 0.09 early Kepler-1455 1.24 0.10 early K2-345 1.27 0.15 early TOI-1685 1.44 0.07 mid-late Gl 49 1.46 0.09 early 1 Kepler-395 1.56 0.14 early Kepler-705 1.6 0.14 early TOI-1201 1.95 0.23 mid-late GJ 685 2.06 0.49 early GJ 3470 2.08 0.15 early K2-264 2.36 0.19 early TOI-3714 2.54 0.18 early K2-286 2.72 0.65 early K2-95 2.83 0.40 mid-late Kepler-155 2.96 0.27 early GJ 740 3.03 0.27 early GJ 514 3.05 0.31 early G 9-40 3.09 0.15 mid-late K2-332 3.14 0.16 mid-late GJ 96 3.16 0.32 early L 168-9 3.17 0.32 early GJ 3323 3.18 0.16 mid-late 1 HD 147379 3.25 0.65 early Kepler-1652 3.26 0.33 early K2-18 3.35 0.20 mid-late LP 714-47 3.39 0.37 early GJ 3293 3.43 0.21 mid-late HATS-71 
3.44 0.21 mid-late LSPM J2116+0234 3.46 0.24 mid-late GJ 393 3.47 0.38 early TOI-1468 3.48 0.24 mid-late TOI-776 3.48 0.38 early HATS-75 3.53 0.39 early TOI-1759 3.57 0.39 early GJ 720 A 3.61 0.40 early HD 260655 3.72 0.45 early Kepler-560 3.73 0.30 mid-late Gl 686 3.78 0.45 early TOI-700 3.85 0.31 mid-late Kepler-235 3.86 0.46 early TYC 2187-512-1 3.91 0.47 early GJ 9689 3.91 0.47 early K2-3 3.94 0.47 early GJ 3138 4.11 0.53 early GJ 1252 4.23 0.42 mid-late LHS 1678 4.23 0.55 mid-late KOI-4777 4.24 0.59 early GJ 4276 4.25 0.43 mid-late TOI-1235 4.31 0.60 early GJ 1265 4.47 0.49 mid-late GJ 436 4.53 0.50 mid-late 1 TOI-122 4.55 0.64 mid-late TOI-1695 4.57 0.69 early LHS 1815 4.58 0.69 early TOI-2136 4.68 0.56 mid-late GJ 536 4.78 0.72 early L 98-59 4.81 0.63 mid-late HD 180617 4.85 0.78 early GJ 625 4.91 0.79 early 1 TOI-674 4.99 0.85 early YZ Cet 5.04 0.71 mid-late 1 GJ 3512 5.22 0.73 mid-late Proxima Cen 5.32 0.74 mid-late 1 GJ 411 5.43 0.92 early 1 GJ 3779 5.62 0.84 mid-late Wolf 1061 5.62 0.84 mid-late 1 GJ 367 5.65 1.07 early G 264-012 5.89 0.94 mid-late TOI-237 6.00 1.14 mid-late LTT 3780 6.11 1.10 mid-late GJ 3929 7.21 1.51 mid-late GJ 251 7.21 1.44 mid-late GJ 1132 7.23 1.45 mid-late Ross 128 7.28 1.46 mid-late GJ 1214 7.4 1.48 mid-late GJ 1002 7.48 1.57 mid-late CD Cet 7.5 1.58 mid-late GJ 486 7.76 1.63 mid-late LHS 1140 7.83 1.64 mid-late TOI-1634 8.32 2.83 early GJ 1151 8.51 1.96 mid-late Wolf 1069 10.23 2.76 mid-late GJ 273 10.3 2.78 mid-late 1 GJ 3473 11.04 3.09 mid-late HD 238090 12.45 3.86 early ^1As explained in the text, uncertainties along the younger branch (Age ≲ 2.9 Gyr) do not account for the full scatter of rotation rates in clusters, and are therefor underestimated. Additionally, some rotation rates from the Exoplanet Archive did not have uncertainties. ^2 A Note = 1 indicates that the rotation rate was updated to match the value presented either elsewhere in this paper, or in the follow-up paper. If you group together G dwarfs, the masses can vary by ∼10%. Grouping together all K dwarfs, the mass can vary by ∼30 – 40%. But studying M dwarfs, even focusing only on M0 – 6.5 dwarfs as we have done here, the mass can vary by >500%. Important changes occur within the stars' interior structures and evolutionary timelines, which can be observed (and need to be accounted for) in the rotation relationships. One dramatic difference was suspected early in the study, but required lengthy follow-up. Preliminary project results showed the rotations of (especially older) early vs. mid-late M dwarfs followed divergent evolutionary paths <cit.> as they aged. This splitting of M dwarf subsets into distinct rotation-based groups has been observed in other studies, as well (see ). Also particularly relevant to this study; models have shown (Louis Amard, private communication) that the oldest stars (subdwarf members of the Halo population) display an interesting and related phenomenon where their interiors are structured as that of a main sequence star with slightly later spectral type (see Tables <ref> and <ref>. An example is Kapteyn's Star, which would initially be considered a member of our early subset since it is classified as sdM1.5, yet models indicate it has a fully convective interior similar to a ∼M2.5 or later main sequence star. A potential explanation for subdwarfs having deeper convective zones than their spectral types would indicate is that their smaller radii lead to larger interior temperature gradients, but confirming the true cause requires further study. 
By an age of 10 Gyr, the average mid-late M dwarf will have a rotation period almost twice as long as the average early M dwarf (∼155 vs ∼85 days – see Figs <ref> & <ref>). These different paths resulted in our first subdivision of the M dwarfs, into what we call the `early' (M0 – 2) and `mid-late' (M2.5 – 6.5) groups. This is near to, but earlier than, the usual spectral type of M3 – 3.5 which is routinely quoted as the transition point to a fully convective interior. <cit.> recently showed, however, that changes due to magnetic effects (which would be rotation-related) likely occur within the M2.1 – 2.3 range, which is encouraging in light of our rotation period results. The decision to not include stars with spectral types later than ∼M6.5 V is intentional. First, we presently do not have a sufficient sample of older stars at these later spectral types with both well-determined ages and rotation periods. Second, from the small sample of such stars that is available, it appears they either experience no appreciable spindown effect, or one that is altogether different from any of the other M dwarf subsets presented here. A well-known example is the M8 V star Trappist-1, that has an age of 7.6 ± 2.2 Gyr and a rotation period of 3.39 days (e.g., ). There is an additional complication at young ages, due to the length of time required for each star to reach the main sequence. As with other issues, this one is also particularly difficult for M dwarfs, whose pre-main sequence lifetimes can range from ∼140 Myr to ∼1.5 Gyr (M0 – 6.5 dwarfs ). Due to this, on top of the spread of initial rotation rates normally displayed by all cool dwarf stars, younger M dwarfs show a larger range of rotation periods, as detailed in several excellent studies (see for recent examples). After an age of ∼3 Gyr, all mid-late M dwarfs have converged onto a single evolutionary track, and any differences between their rotation-determined ages are negligible compared to the uncertainties of the relationships. At young ages, though, this group appears to require further subdivision, most likely as a result of lengthening pre-main sequence lifetimes. Due to the distances of some of these young clusters, this aspect of the study is in need of further study, but M dwarfs later than ∼M4 sensibly appear to follow a different evolutionary path while young (the bottom plots of Fig <ref> show this further subdivision). Previous studies (see ) have proposed the rotation periods of cool dwarfs do not follow one continuous evolutionary track, characterized by a single powerlaw relationship (P_rot ∝ Age^n), with a `braking index' determined by the exponent (n). This was originally proposed by <cit.>, with an initially determined value of n = 0.5 for that study's target sample, and recent studies have revised this value. <cit.> and <cit.>, for example, each derived an index of n ≈ 0.62, although studied only F and G dwarfs – more massive than the stars studied here – but included stars up to ∼M3. For these data, and the activity-age relationships in the following subsections, a two-segment linear equation was defined using the function and fit to each data set using . 
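To make the fitting procedure concrete, a minimal sketch of a two-segment linear fit in semi-log space is given below. The routine names referenced in the text are not legible in this extraction, so SciPy's curve_fit is used purely as a stand-in, and the data arrays are illustrative placeholders rather than the benchmark values tabulated above.

import numpy as np
from scipy.optimize import curve_fit

def two_segment(p_rot, a, b, p_break, c):
    # log Age = a*P + b on the young branch; an extra slope c applies beyond the break point.
    log_age = a * p_rot + b
    return np.where(p_rot >= p_break, log_age + c * (p_rot - p_break), log_age)

# Illustrative placeholder data (rotation periods in days, ages in Gyr), not the benchmark tables.
p_rot = np.array([2.0, 16.0, 23.0, 39.0, 80.0, 90.0])
log_age = np.log10(np.array([0.12, 0.73, 2.5, 3.6, 9.5, 11.5]))
log_age_err = np.full_like(log_age, 0.1)

popt, pcov = curve_fit(two_segment, p_rot, log_age, sigma=log_age_err,
                       p0=[0.06, -1.0, 24.0, -0.05], absolute_sigma=True)
a, b, p_break, c = popt
perr = np.sqrt(np.diag(pcov))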
The final, fitted age-rotation relationships are shown in Fig <ref> with the best-fitting parameters: M0–2 dwarfs: log Age (Gyr) = 0.0621 [0.0025] × P_rot (days) - 1.0437 [0.0394] for P_rot < 24.0379 [0.8042] log Age (Gyr) = 0.0621 [0.0025] × P_rot (days) - 1.0437 [0.0394] - 0.0533 [0.0026] × (P_rot - 24.0379 [0.8042]) for P_rot≥ 24.0379 [0.8042] M2.5–3.5 dwarfs [M2.5 – M6.5, if P_rot≳24.18]: log Age (Gyr) = 0.0561 [0.0012] × P_rot (days) - 0.8900 [0.0188] for P_rot < 24.1823 [0.4384] log Age (Gyr) = 0.0561 [0.0012] × P_rot (days) - 0.8900 [0.0188] - 0.0521 [0.0013] × (P_rot - 24.1823 [0.4384]) for P_rot≥ 24.1823 [0.4384] M4 – 6.5 dwarfs: log Age (Gyr) = 0.0251 [0.0022] × P_rot (days) - 0.1615 [0.0309] for P_rot < 25.4500 [2.4552] log Age (Gyr) = 0.0251 [0.0022] × P_rot (days) - 0.1615 [0.0309] - 0.0212 [0.0022] × (P_rot - 25.4500 [2.4552]) for P_rot≥ 25.4500 [2.4552] Together, these relationships cover M0–6.5 dwarfs. Within 10 pc of the Sun, this represents ∼72% of all stars with known spectral types (and ∼48% of the wider range of objects, including stellar remnants and brown dwarfs <cit.>). As mentioned early in the paper, an increasing number of M dwarfs are being discovered as exoplanet hosts. Determining the ages of these stars, and thus the ages of their exoplanets, is important when selecting the ideal targets to further study for evidence of habitability or even life. Single-celled organisms originated when the Earth (the only example we currently have for such events) was ∼0.7 – 0.9 Gyr old. The Great Oxygenation of the atmosphere occurred when Earth was ∼2.2 Gyr old, the Cambrian Explosion and rapid diversification of complex lifeforms when Earth was ∼4 – 4.1 Gyr old, and technological civilization didn't occur until the Earth was >4.5 Gyr old. Thus, exoplanet age is an important descriminator in the search for life. To demonstrate a benefit of our relationships, we provide the gyrochronological ages for all ∼M0–6.5 dwarf exoplanet hosts with a listed rotation period in the NASA Exoplanet Archive[<https://exoplanetarchive.ipac.caltech.edu/>] in Table <ref>. The follow-up paper (Engle 2023) will focus on the X-ray and UV activity of M dwarfs over time, and what insights these relationships offer for the stars, their magnetic dynamos, and their suitability to host potentially habitable planets. We do wish to advise restraint, though, when using the first or `young' tracks of these relationships. These ages were included to, as best we could, characterize the fullest evolutionary paths of average early to mid-late M dwarfs. However, as noted earlier in this section, M dwarfs display a wide range of rotation rates at young ages that are not represented by the relationship uncertainties (note the data point vs. relationship uncertainties along the `young' tracks in Fig <ref>). Including estimates from activity-age relationships extends the reliable range of age determinations, and this is discussed in the companion paper. <cit.> theorized that the rotational evolution of main sequence stars showed two possible sequences, dependent on both time and mass (therefore spectral type). The two sequences were termed the Interface (I) and Convective (C) sequences, named after the magnetic dynamos and interior structures of the stellar groups. The more massive, hotter stars spend only a short time on the C sequence before switching to the I sequence. 
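Before returning to the physical interpretation, the relations above can be applied directly; a minimal sketch for the M0–2 branch is given below. The coefficients are the best-fit values quoted above, the parameter uncertainties are omitted for brevity, and the analogous functions for the M2.5–3.5 and M4–6.5 branches follow the same pattern.

def gyro_age_early_m(p_rot_days):
    # Two-segment M0-2 relation from above; returns the gyrochronological age in Gyr.
    # Note the caution above: the young branch (P_rot below the break) is less reliable
    # because of the large spread of rotation rates at early ages.
    p_break = 24.0379
    log_age = 0.0621 * p_rot_days - 1.0437
    if p_rot_days >= p_break:
        log_age -= 0.0533 * (p_rot_days - p_break)
    return 10.0 ** log_age

# An early M dwarf rotating in ~87 days comes out near ~10 Gyr, consistent with the
# ~9.5 Gyr Thick Disk/Halo benchmarks listed in the tables above.
print(f"{gyro_age_early_m(87.1):.1f} Gyr")

We now return to the interface/convective sequence picture introduced above.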
On the I sequence, there is an interface between the radiative and convective regions of the stellar interior, but the magnetic field couples the regions together and much of the stellar interior rotates as a rigid body. Lower mass stars spend longer amounts of time on the C sequence before switching over, and fully convective stars likely never leave the C sequence. <cit.> put forth an analytical model theorizing that stars do not initially evolve as rigid bodies. Rather, as the stellar surface loses angular momentum, a profile of differential rotation builds within the star. Angular momentum is transported from the interior to the surface and eventually the interior of the star `re-couples'. This accounts for the two-track evolutionary path where the second evolutionary track begins after the interior of the star has re-coupled. only calculated their model down to early M dwarf masses. At this mass range, however, the models predict that a ∼4 Gyr old early M dwarf will have a ∼ 31-32 day rotation period, where our data indicates a ∼40-45 day period. To determine the braking indices of our M dwarf subsets, and to serve as an additional comparison to literature results, rotation vs. age data were fitted in linear space with a two-segment powerlaw equation. A best fitting braking index (for the second evolutionary track) was determined to be 0.61 for the early M dwarfs and 0.62 for the mid-late M dwarfs; nearly identical to each other, well within the parameter uncertainties, and in excellent agreement with the results of <cit.> and <cit.>. However, it is again worth noting that based their braking index determination on solar-like F and G dwarfs, and on a comparison of the Praesepe cluster and the Sun. Just over fifty years after <cit.> first discovered the spindown effect operating in solar-type G dwarfs, it is an interesting implication that all cool dwarfs, from late F to ∼M6.5, may perhaps spindown according to the same braking index but simply with different re-coupling timescales. Further investigation into the angular momentum loss of M dwarfs, using methods such as those of <cit.> and <cit.> and comparisons between measures and estimated magnetic field strengths and mass loss rates, are underway for inclusion in a follow-up paper. These data and relationships will be of great use to the field and offer valuable insights into the most populous stellar members of our galaxy, M dwarfs. They allow for reliable ages and evolutionary histories to be determined, but may also offer further insight into the differing dynamo mechanisms at work within the M dwarf subsets and how each mechanism influences, or responds to, the star's evolution over time. § ACKNOWLEDGEMENTS We would like to acknowledge the tireless work of the RCT Consortium members, including former members David Laney and Louis-Gregory Strolger, for the operation and maintenance of the telescope and equipment, without which this study would not have been possible. We also wish to acknowledge the builders and operators of the Skynet telescope network and the Zwicky Transient Facility. Partial support for this project was provided by CXO grant GO0-21020X. SGE thanks Louis Amard for invaluable discussions and access to his M dwarf interior models. SGE also thanks Andrej Prša for very helpful discussions regarding the regression fitting. KPNO:RCT, CTIO:PROMPT astropy <cit.>, Matplotlib <cit.>, Pandas <cit.>, NumPy <cit.>, SciPy <cit.>, AstroImageJ <cit.> aasjournal
http://arxiv.org/abs/2307.02588v1
20230705183422
TransformerG2G: Adaptive time-stepping for learning temporal graph embeddings using transformers
[ "Alan John Varghese", "Aniruddha Bora", "Mengjia Xu", "George Em Karniadakis" ]
cs.LG
[ "cs.LG", "cs.AI", "math.DS" ]
1]Alan John Varghese [1]School of Engineering, Brown University, Providence, RI 02912, USA 2]Aniruddha Bora [2]Division of Applied Mathematics, Brown University, Providence, RI 02912, USA 2,3]Mengjia Xu Corresponding author [3]Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA 1,2]George Em Karniadakis [Corresponding author]Corresponding author: [email protected] Dynamic graph embedding has emerged as a very effective technique for addressing diverse temporal graph analytic tasks (i.e., link prediction, node classification, recommender systems, anomaly detection, and graph generation) in various applications. Such temporal graphs exhibit heterogeneous transient dynamics, varying time intervals, and highly evolving node features throughout their evolution. Hence, incorporating long-range dependencies from the historical graph context plays a crucial role in accurately learning their temporal dynamics. In this paper, we develop a graph embedding model with uncertainty quantification, TransformerG2G, by exploiting the advanced transformer encoder to first learn intermediate node representations from its current state (t) and previous context (over timestamps [t-1, t-l], l is the length of context). Moreover, we employ two projection layers to generate lower-dimensional multivariate Gaussian distributions as each node's latent embedding at timestamp t. We consider diverse benchmarks with varying levels of “novelty" as measured by the TEA plots. Our experiments demonstrate that the proposed TransformerG2G model outperforms conventional multi-step methods and our prior work (DynG2G) in terms of both link prediction accuracy and computational efficiency, especially for high degree of novelty. Furthermore, the learned time-dependent attention weights across multiple graph snapshots reveal the development of an automatic adaptive time stepping enabled by the transformer. Importantly, by examining the attention weights, we can uncover temporal dependencies, identify influential elements, and gain insights into the complex interactions within the graph structure. For example, we identified a strong correlation between attention weights and node degree at the various stages of the graph topology evolution. Graph embedding transformer dynamic graphs link prediction unsupervised contrastive learning long-term dependencies § INTRODUCTION Numerous real-world datasets, such as social networks, financial networks, biological protein networks, brain networks, citation networks, disease spreading networks, and transportation networks, naturally exhibit complex “graph-like” structures <cit.>. A graph typically consists of a set of nodes that represent entities, and a set of edges that depict relationships between nodes. In the past few years, graph representation learning has gained significant attention for its crucial role in effectively analyzing complex, high-dimensional graph-structured data across various application domains, e.g., drug discovery in healthcare <cit.>, protein structure and property prediction <cit.>, brain network analysis <cit.>, traffic forecasting <cit.>, partial differential equation learning <cit.>. 
Moreover, graph embedding techniques have gained a lot of success due to the capability to learn highly informative latent graph representations projected in a lower-dimensional space from high-dimensional graphs, while the important and intricate graph topological properties <cit.> can be maximally preserved in the latent embedding space. The lower-dimensional graph embeddings can be readily and efficiently applied for a broad range of downstream graph analytic tasks, such as link prediction, node classification, anomaly detection, recommender systems, and graph completion. However, in practice graphs usually change over time, which can be observed by newly added or removed nodes (or edges), changing node (or edge) labels or features with heterogeneous dynamic behavior. Hence, it is more challenging to learn temporal graph embedding compared to most of the existing works on static graph embeddings <cit.>. In this work, we first present the multi-step DynG2G algorithm, a simple modification to the original DynG2G that helps to incorporate temporal information while learning the embeddings. Then, we propose TransformerG2G, a deep learning model with the transformer architecture at its core, which includes the history of a node while projecting it into the embedding space. We learn embeddings as multivariate Gaussian distributions, where we learn the mean and variance associated with each node. This methodology helps to quantify the uncertainty associated with the embeddings, which is typically neglected in other existing models that consider temporal history. In section <ref> we present a review of the related works on learning graph embeddings in dynamic graphs. In section <ref>, we discuss the methodology. We begin with discussing the definition of dynamic graphs and of the temporal edge appearance (TEA) plot. We also present a brief recap of the original DynG2G model. The multi-step method and TransformerG2G model are also discussed in this section. In section <ref>, we discuss the six benchmark datasets used in this study and their implementation details. We then present the results of the TransformerG2G model for the six benchmark datasets, and focus on the time-dependent attention matrices. In section <ref>, we present the summary of the work, limitations and directions for future developments. § RELATED WORKS Numerous studies have been conducted on static graph embedding using matrix factorization <cit.>, random walk-based approaches <cit.>, and deep neural networks <cit.>, but a major limitation of these methods is that they cannot effectively capture the important temporal information over time. Hence, dynamic graph embedding has emerged as a crucial research area for enhancing conventional graph embedding methods. It can be divided into two main categories: continuous time graph embedding and discrete time graph embedding <cit.>. Continuous time graph embedding aims to learn temporal graph embedding for a series of temporal interactions at higher time resolution <cit.>. Moreover, DyRep <cit.> attempted to model the continuous time graph as “temporal point processes” and incorporate the self-attention mechanism <cit.> to learn temporal relationship embeddings, or using recurrent neural networks (RNNs) to model the dynamics <cit.>. 
TGN <cit.> is a more generic and computationally efficient method for temporal graph embedding, because it utilizes efficient parallel processing and a single graph attention layer to update node-wise features based on the current memory and temporal node features, which address the main problem of <cit.>. However, in discrete-time graph embedding, most works attempted to learn node embedding over time as lower-dimensional point vectors, i.e., each node is represented as a vector. To capture the temporal dynamics over different time stamps, some works have been conducted based on autoencoder architecture along with transfer learning approach <cit.> or recurrent neural networks (RNNs) <cit.> to capture the dynamics over timestamps. Xu et al. <cit.> firstly presented the “stochastic graph embedding” approach (i.e., DynG2G) for temporal graphs where each graph node at a timestamp can be projected into a latent space as probabilistic density function represented by mean and variance vectors. Therefore, DynG2G provides important uncertainty quantification that facilitates effective identification of the optimal embedding size. Given the great success of graph neural networks (GNN) and graph convolutional networks (GCN) in effective graph representation learning, several recent studies introduced time-dependent graph neural network (GNN) models that combine GNNs or GCNs with RNNs or long short-term memory (LSTM) for temporal graph embedding <cit.>. Nevertheless, the aforementioned methods often fail to accurately capture long-term historical features, especially when the temporal behaviors of nodes or edges exhibit heterogeneous dynamics over timestamps. In recent years, the self-attention mechanism, originally developed for transformers <cit.>, has demonstrated distinct advantages in effectively and efficiently capturing long-range dependencies with adaptive and interpretable attention weights. The mechanism has been successfully applied in diverse domains (e.g., natural language processing, computer vision, and sequence modeling). To incorporate the self-attention mechanism for dynamic graph embedding, studies in <cit.> and <cit.> adopted the self-attention mechanism to jointly encode the spatial and temporal dynamics for discrete time graphs. However, the computational efficiency of the aforementioned methods decreases quadratically when applied to large-scale graphs. More comprehensive dynamic graph embedding works may be also found in some recent surveys <cit.>. In this paper, our main focus lies in spatio-temporal stochastic discrete time graph embedding (improved version of our prior DynG2G <cit.>) that emphasizes long-range dependencies with advanced transformers. Moreover, our proposed TransformerG2G model aims to encode each graph node into a latent space as a “probability density function”. Our approach differs significantly from the aforementioned methods, which typically encode each graph (node or edge) as a “deterministic” lower-dimensional point-vector. However, our TransformerG2G model takes into account the crucial “node uncertainty information” in the latent space, which allows for the effective characterization of a wide range of node properties within the latent space, including node neighborhood diversity and optimal embedding dimensionality. § METHODOLOGY §.§ Preliminaries §.§.§ Dynamic graph definition A dynamic graph can be generally modeled in two ways: “discrete-time” graph snapshots or “continuous-time” graph <cit.>. 
In our study, we specifically focused on discrete-time graphs, which involve a sequence of graph snapshots captured at different timestamps. Each snapshot represents the state of the graph at a particular time point, whose edges and nodes may change or evolve across these discrete time points. Here, we denote a temporal graph as 𝒢 = {G_t}_t=1^T, where G_t is the graph snapshot at timestamp t. Fig. <ref> depicts six discrete-time temporal graph benchmarks with varying evolutionary dynamics. The top row in each subfigure presents visualizations of four graph snapshots selected from four variant timestamps, and the bottom row shows the corresponding temporal edge appearance (TEA) plot for characterizing the temporal dynamics as described in the Section <ref>. §.§.§ Temporal edge appearance (TEA) plot To characterize the temporal dynamic patterns in dynamic graphs, we utilize the effective temporal edge appearance (TEA) plot proposed in <cit.> to quantify the proportion of repeated edges compared to newly observed edges at each timestamp. Moreover, the average ratio of new edges in each timestamp can be calculated as the novelty index for every benchmark using Eq. <ref>. novelty = 1/T∑_t=1^T |E_t \ E_seen^t|/|E_t|, where E_t represents a set of edges at timestamp t, E_seen^t denotes the set of edges seen in the previous timestamps {1, ⋯, t-1}. In Fig. <ref>, the bottom row in each subfigure presents the corresponding TEA plot, where the blue bars represent the number of repeated edges, and the red bars denote the number of newly added edges. The novelty index values computed for the six dynamic graph benchmarks in a)-f) are 0.0252, 0.0761, 0.7526, 0.9161, 0.9861, and 0.014, respectively. Specifically, Fig. <ref>a) and f) with small novelty index values present relatively smooth changes in dynamics over time, while Fig. <ref>b) presents medium changing dynamics with more graph topological changes in the first 20 snapshots but smaller changes for the last 20 snapshots. In contrast, Fig. <ref>c)-e) achieves the highest novelty values and hence it exhibits highly evolutionary dynamics with a significant number of newly added edges over different timestamps. §.§.§ Stochastic dynamic graph embedding with uncertainty quantification In our prior work <cit.>, we developed a temporal graph embedding method based on the DynG2G model with effective uncertainty quantification. The main framework of DynG2G is illustrated in Fig. <ref>. Given a dynamic graph represented by a sequence of graph snapshots (G_1, G_2, ⋯, G_T) at different timestamps, it first learns an intermediate representation for each node v_i ∈ V_1 in the first graph snapshot with the so-called G2G encoder <cit.> (the detailed G2G model architecture is presented in Fig. <ref>), and outputs the lower-dimensional probabilistic Gaussian embeddings in terms of the mean (μ_i) and the variance (σ_i^2) vectors[The covariance matrix Σ_i is a square diagonal matrix with the variance vector σ_i^2 as its diagonal elements.], with two additional projection heads (i.e., one single-layer linear dense network and the other one is a nonlinear single-layer dense network with “elu” activation function[ elu(x) = x, if x > 0 α(e^x - 1), otherwise]). For the next snapshot training, it employs an extension of the Net2WiderNet approach <cit.> to adaptively expand the network hidden layer size based on the number of changing nodes in the next snapshot. 
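As a small illustration of the novelty index defined above, the computation can be written in a few lines; the snippet below assumes each snapshot is available as a Python set of edge tuples, which is a hypothetical data layout rather than the loaders used in our experiments.

def novelty_index(edge_sets):
    # Average fraction of edges at each timestamp that have not appeared in any
    # earlier snapshot, following the definition above; assumes non-empty snapshots.
    seen = set()
    total = 0.0
    for edges in edge_sets:
        new_edges = edges - seen
        total += len(new_edges) / len(edges)
        seen |= edges
    return total / len(edge_sets)

# Toy example with three snapshots: (1 + 1/2 + 1/3) / 3 ≈ 0.61.
snapshots = [{(0, 1), (1, 2)}, {(0, 1), (2, 3)}, {(0, 1), (2, 3), (3, 4)}]
print(novelty_index(snapshots))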
To capture the graph dynamics, we train the encoder in the second timestamp starting with the weights transferred from the pre-trained model for the first graph snapshot. Finally, the model is trained iteratively over the training dataset via optimizing a time-dependent node triplet-based contrastive loss using the Adam optimizer <cit.>. Each node is represented as a lower-dimensional multivariate Gaussian distribution including mean and variance vectors in the latent density function space. A key advantage of our previous work, DynG2G <cit.>, is its ability to uncover the correlation between the optimal embedding size (L_o) and the effective dimensionality of uncertainty (D_u) based on the learned embedding variances across all eight benchmarks. The results in <cit.> also suggest a clear path to selecting the graph optimum embedding dimension by choosing L_o ≥ D_u. However, the DynG2G model has a limitation of neglecting the long-term dependencies over multiple previous timestamps in the temporal graphs. In order to measure the temporal graph correlations between the current graph snapshot and its preceding ones over time, we compute the cosine similarity between graph snapshot at timestamp t (t = 10, 20, ⋯, T) and its preceding ones over the previous 10 timestamps. The corresponding results for all six dynamic graph benchmarks are shown in Fig. <ref>. The red curve in each figure represents the average cosine similarity over all curves corresponding to different time windows (window size = 10). We can observe that the SBM and AS datasets (with low novelty) exhibit subtle changes in graph structures over time, as indicated by the stable high cosine similarities between the current timestamp and the previous 10 timestamps. However, the UCI, Slashdot and Bitcoin datasets (with high novelty) show a sharp drop in the cosine similarity with preceding timestamps, indicating more substantial changes in the graph structures over time compared to the SBM and AS datasets. Fig. <ref> demonstrates varying temporal evolving patterns across different timestamps for different benchmarks, hence highlighting the importance of incorporating historical context to capture temporal dynamics of graphs. We develop two different methods in sections <ref> and <ref> to learn more accurate temporal graph embeddings using long-term historical information, namely, a multi-step method and a transformer-based method. §.§ Dynamic graph embedding with multi-step methods To address the aforementioned problem of our prior DynG2G work <cit.>, we first apply a conventional multi-step method to capture temporal dynamics over timestamps. To this end, in Eq. <ref>, we adopt a two-step method to improve the weight initialization schema in the DynG2G model. Specifically, we initialize the weight matrix W_t at timestamp t using a weighted schema by θ based on the weight matrices learned from timestamps t-1 and t-2. W_t = θ W_t-1 + (1-θ) W_t-2. Note that the aforementioned two-step method is equivalent to our prior DynG2G method when θ = 1. We tested different θ parameters to initialize the weight matrix at timestamp t by transferring the weights learned from the previous two timestamps t-1 and t-2 with coefficients determined by θ and 1-θ. Fig. <ref> shows three bar plots illustrating the mean average precision (MAP) values (see its definition in our prior work <cit.>) for temporal link prediction using SBM, Reality Mining and UCI datasets. For the SBM dataset, varying θ does not result in a significant improvement in MAP or MRR. 
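For clarity, the two-step initialization above amounts to blending the parameters saved after the two preceding snapshots before training on snapshot t. The sketch below assumes a PyTorch implementation with matching parameter shapes (if the Net2WiderNet expansion changed the layer widths, it would need to be applied first); the checkpoint names are hypothetical.

import torch

def blend_initialization(model, state_prev, state_prev2, theta=0.5):
    # W_t = theta * W_{t-1} + (1 - theta) * W_{t-2}, applied parameter-wise to the
    # state dicts saved after training on snapshots t-1 and t-2.
    blended = {name: theta * state_prev[name] + (1.0 - theta) * state_prev2[name]
               for name in state_prev}
    model.load_state_dict(blended)
    return model

# Hypothetical usage before training on snapshot t:
# encoder = blend_initialization(encoder, torch.load("encoder_t1.pt"),
#                                torch.load("encoder_t2.pt"), theta=0.5)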
Since SBM is a synthetic dataset with Markovian behavior, i.e., the graph at the next timestamp depends only on the current timestamp, including more history does not enhance the quality of the learned embeddings. In the case of Reality Mining, we see that it achieves higher MAP and MRR values using the two-step method compared to the ones using DynG2G (θ = 1). An intriguing observation from the temporal link prediction results in Fig. <ref> b) is that the MAP value increases as θ decreases (i.e., placing more emphasis on t-2). In the case of UCI dataset, we do not see an improvement in the MAP or MRR by using the two-step method despite of the fact that the data are non-Markovian, indicating the necessity of using more complex models. Additionally, Table <ref> provides a comprehensive overview of the numerical results for the temporal link prediction task. The table includes the MAP and mean reciprocal rank (MRR) (see its definition in our prior work <cit.>) values achieved by utilizing stochastic graph embeddings learned from the modified DynG2G model with two-step approach. To further improve the expressivity of the DynG2G model <cit.>, we consider a “three-step method” that enables capturing heterogeneous dynamics learned from the previous three timestamps. The specific weight initialization schema is presented in Eq. <ref>. The weight matrix at timestamp t is initialized by transferring the learned weight matrices from three previous timestamps with different coefficients (θ_1, θ_2, θ_3), where θ_1 + θ_2 + θ_3 = 1. W_t = θ_1 W_t-1 + θ_2 W_t-2 + θ_3 W_t-3, where θ_1 + θ_2 + θ_3 = 1. The corresponding temporal link prediction results on the UCI dataset with three-step approach are shown in Table. <ref>. The experimental results show that by leveraging more temporal node context, the three-step method achieves higher MAP and MRR values compared to the two-step method on the UCI dataset. The findings derived from the multi-step method offer great insights into the benefits of increasing the temporal context for accurate and robust dynamic graph embedding learning. In section <ref>, we present a major improvement on the previous DynG2G framework by replacing the G2G encoder with a transformer encoder for capturing long-term dependencies from the input sequence of graph snapshots. §.§ Dynamic graph embedding with transformers We consider a discrete-time temporal graph 𝒢 = {G_t}_t=1^T with a series of graph snapshots across T timestamps, where each timestamp consists of a vertex set V_t = { v_1, v_2, v_3, ⋯, v_|𝒱 _t|} of |𝒱_t| nodes and an edge set E_t = {e_i,j |i,j ∈ |𝒱_t| }. The main goal is to project each node from the high-dimensional dense non-Euclidean space to lower-dimensional density function space as a multivariate Gaussian distribution. This involves utilizing the temporal graph structure changes from the current timestamp as well as previous timestamps, in order to obtain the representation for each node. To achieve this, we propose the TransformerG2G model to learn a mapping from a sequence corresponding to the ith node's history, (v_i^t-l, ⋯, v_i^t-1, v_i^t), to a joint normal distribution h_i^t = 𝒩(μ_i^t, Σ_i^t), with a transformer as the central part of its architecture to capture the temporal dynamics of the graph. During preprocessing, the original adjacency matrices A_t, with shape (n_t, n_t), are modified to obtain Ã_t, ensuring that all the modified adjacency matrices have the same shape of (n, n). 
Here n = max({ |𝒱 _1|, |𝒱 _2|, ⋯ , |𝒱 _T|}) is the maximum number of nodes across all timestamps, and any nodes absent during a timestamp are represented using zeroes, indicating no connections. The TransformerG2G model takes as input the node history represented by the sequence (v_i^t-l, ⋯, v_i^t-1, v_i^t), where v_i^t ∈ℝ^n and v_i^t corresponds to the ith row of Ã_t. These input vectors are first projected to a d-dimensional space using a linear transformation, followed by a vanilla positional encoding. The resulting vectors are then processed by an encoder block, which outputs a sequence of vectors. The output vectors of the encoder block are concatenated and nonlinearly mapped to obtain a single vector. This vector is then fed through the projection heads to obtain the embeddings μ_i^t ∈ℝ^L_0 and Σ_i^t ∈ℝ^L_0, representing the node v_i^t. The self-attention mechanism in the encoder block enables the model to learn the information from the node's historical context while predicting the node's embedding. Specifically, we employ the scaled dot-product attention mechanism <cit.> in our TransformerG2G model, defined as follows. Attention(Q,K,V) = softmax(QK^T/√(d))V, where Q denotes the query matrix, K denotes the key matrix and V denotes the value matrix. The attention mechanism plays a crucial role in capturing long-term dependencies and modeling relationships between different timestamps in the graph. Next, we discuss the loss function and the training methodology of the TransformerG2G model. The TransformerG2G model is finally trained using a triplet-based contrastive loss defined in Eq. <ref>. A node triplet set 𝒯_t = { (v_i, v_i^near, v_i^far)| v_i ∈ V_t} is sampled such that the shortest path between the reference node v_i and v_i^near is smaller than the shortest path between the reference node v_i and v_i^far, i.e., sp(v_i, v_i^near) < sp(v_i, v_i^far). ℒ = ∑_t ∑_(v_i, v_i^near, v_i^far) ∈𝒯_t[ 𝔼^2_(v_i, v_i^near) + e^-𝔼_(v_i, v_i^far)], where 𝔼_(v_i, v_i^near) and 𝔼_(v_i, v_i^far) denote the Kullback-Leibler divergence (KL divergence) between the embeddings of reference node and near-by node, and the embeddings of reference node and far-away node, respectively. Here, KL divergence is used as a metric for measuring the dissimilarity between the joint normal distributions of two nodes in the embedding space. The specific formula of the KL divergence between the multivariate Gaussian embeddings of two nodes (v_i and v_j) is shown in Eq. <ref>. 𝔼_( v_i, v_j ) = D_KL( 𝒩(μ_i,Σ_i), 𝒩(μ_j, Σ_j) ) = 1/2[ tr(Σ_j^-1Σ_i) + (μ_j - μ_i)^T Σ_j^-1(μ_j-μ_i) - L + log|Σ_j|/|Σ_i|]. By employing the TransformerG2G model, we aim to obtain lower-dimensional multivariate Gaussian representations of nodes, that effectively capture long-term temporal dynamics with variant lengths of temporal node context. § EXPERIMENTS §.§ Dataset descriptions We evaluated our proposed TransformerG2G model on six different dynamic graph benchmarks with varying evolutionary dynamics. The specific graph dataset descriptions are shown in Table <ref>. Stochastic Block Model (SBM) dataset[<https://github.com/IBM/EvolveGCN/tree/master/data>]: It is generated using the Stochastic Block Model (SBM) model. The first snapshot of the dynamic graph is generated to have three equal-sized communities with in-block probability 0.2 and cross-block probability 0.01. To generate subsequent graphs, it randomly picks 10-20 nodes at each timestep and move them to another community. 
The final generated synthetic SBM graph contains 1000 nodes, 4,870,863 edges and 50 timestamps. Reality Mining dataset[<http://realitycommons.media.mit.edu/realitymining.html>]: The network contains human contact data among 100 students of the Massachusetts Institute of Technology (MIT); the data was collected with 100 mobile phones over 9 months in 2004. Each node represents a student; an edge denotes the physical contact between two nodes. In our experiment, the dataset contains 96 nodes and 1,086,403 undirected edges across 90 timestamps. UC Irvine messages (UCI) dataset[<http://konect.cc/networks/opsahl-ucsocial/>]: It contains sent messages between the users of the online student community at the University of California, Irvine. The UCI dataset contains 1,899 nodes and 59,835 edges across 88 timestamps (directed graph). This dataset too exhibits highly transient dynamics. Slashdot dataset[<http://konect.cc/networks/slashdot-threads/>]: It is a large-scale social reply network for the technology website Slashdot. Nodes represent users and edges correspond to the replies of users. The edges are directed and start from the responding user. Edges are annotated with the timestamp of the reply. The Slashdot dataset contains 50, 824 nodes and 42, 968 edges across 12 timestamps. Bitcoin-OTC (Bit-OTC) dataset[<http://snap.stanford.edu/data/soc-sign-bitcoin-otc.html>]: It is who-trusts-whom network of people who trade using Bitcoin on a platform called Bitcoin OTC. The Bit-OTC dataset contains 5,881 nodes and 35,588 edges across 137 timestamps (weighted directed graph). This dataset exhibits highly transient dynamics. Autonomous Systems (AS) dataset[<https://snap.stanford.edu/data/as-allstats.html>]: It consists of a communication network of who-talks-to-whom from the BGP (Border Gateway Protocol) logs. The dataset can be used for predicting message exchange in the future. The AS dataset used in our experiment contains 65,535 nodes and 13,895 edges with 100 timestamps in total. §.§ Implementation details Evaluating graph embedding algorithms involves two stages: i) model training and obtaining embeddings for every node for each timestamp, ii) evaluating the performance on a downstream task, such as link prediction and node classification. In our experiments, downstream link prediction task is performed to demonstrate the goodness of the learned embeddings. The accuracy metrics on the link prediction task are used as a proxy to quantify the performance of the embedding algorithm. For training both the TransformerG2G model and the classifier, we utilize the first 70% of the timestamps for training, the next 10% of timestamps for validation and the remaining 20% of the timestamps for testing. The number of train/val/test timestamps for each dataset is shown in Table <ref>. TransformerG2G: The dimensionality n of the vectors in the input sequence to TransformerG2G is the maximum number of nodes in the graph dataset and is shown in Table <ref> for all the benchmark datasets. The first linear layer projects these input vectors to a d-dimensional space. We use d = 256 for SBM, Reality Mining, Slashdot and Autonomous System datasets, and d=512 for UCI and Bitcoin datasets. In the transformer, we used one encoder layer with a single attention head. In the nonlinear layer after the encoder, a tanh activation function and the vectors were projected to a 512 dimensional space, similar to the DynG2G paper. In the nonlinear projection head we used an elu activation function. 
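A minimal sketch of this architecture is given below (PyTorch-style pseudocode reflecting the description above; the learned positional-encoding stand-in, the layer names, and the "+1" shift used to keep the predicted variances positive are our assumptions rather than the exact implementation):

```python
import torch
import torch.nn as nn

class TransformerG2GSketch(nn.Module):
    """Simplified TransformerG2G: maps a node's history of adjacency rows
    (lookback + 1, n) to a Gaussian embedding (mu, sigma) of size L0."""
    def __init__(self, n, d=256, hidden=512, L0=64, lookback=2):
        super().__init__()
        self.input_proj = nn.Linear(n, d)                 # project n-dim rows to d
        # Stand-in for the vanilla (sinusoidal) positional encoding used in the paper.
        self.pos_enc = nn.Parameter(torch.zeros(lookback + 1, d))
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=1, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=1)
        self.nonlinear = nn.Linear((lookback + 1) * d, hidden)  # concat -> 512, then tanh
        self.mu_head = nn.Linear(hidden, L0)
        self.sigma_head = nn.Linear(hidden, L0)

    def forward(self, x):                                 # x: (batch, lookback+1, n)
        h = self.input_proj(x) + self.pos_enc             # (batch, lookback+1, d)
        h = self.encoder(h)                               # self-attention over timestamps
        h = torch.tanh(self.nonlinear(h.flatten(1)))      # concatenate and map nonlinearly
        mu = self.mu_head(h)
        sigma = nn.functional.elu(self.sigma_head(h)) + 1 + 1e-14  # positive variances
        return mu, sigma
```

The triplet-based contrastive loss in Eq. <ref> is then evaluated on the (μ, Σ) pairs of the sampled node triplets, exactly as in DynG2G.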
To optimize the weights of the TransformerG2G model, we used the Adam optimizer for all datasets. The learning rate was set to 1e-4 for SBM, Reality Mining and Autonomous Systems. In the case of UCI, Slashdot and Bitcoin we used a learning rate of 1e-6. The first 70% of the timestamps was used for training the TransformerG2G model, and the next 10% of the timestamps was used for validation. Once the model was trained, we used the trained model to predict and save the embeddings of the nodes for all the timestamps. Please note that we have a single model for all the timestamps, whereas DynG2G had a different model for each training timestamp. Link prediction task: In the downstream link prediction task, we used a classifier to predict whether two nodes have a direct link or not, given the embeddings of the two nodes. The classifier was a MLP with one hidden layer with L_0 neurons in the hidden layer and ReLU activation. The classifier takes as input the embeddings (μ_i and μ_j) of two nodes. The classifier was trained to minimize the weighted binary cross-entropy loss using an Adam optimizer with a learning rate of 1e-4. For this task as well, the data is split into train/val/test as shown in Table <ref>. The classifier was trained on the first 70% of the timestamps, and is used to predict the links on the testing timestamps. The average MAP and MRR values over the graphs during the testing timestamps are reported to quantify the goodness of the embeddings. §.§ Baseline To evaluate the performance of our model, TransformerG2G, in the task of temporal link prediction, we compare the proposed TransformerG2G with five other baselines: DynGEM <cit.>, dyngraph2vecAE <cit.>, dyngraph2vecAERNN <cit.>, EvolveGCN <cit.>, and DynG2G <cit.>. Brief descriptions about these methods are presented below. * DynGEM <cit.>: This is a dynamic graph embedding method that uses autoencoders to obtain graph embeddings. The weights of the previous timestamp are used to initialize the weights of the current timestamp, and thereby makes the training process faster. * dyngraph2vecAE <cit.>: This method uses temporal information in dynamic graphs to obtain embeddings using an autoencoder. The encoder takes in the historical information of the graph, and the decoder reconstructs the graph at the next timestamp, and the latent vector corresponds to the embedding of the current timestamp. * dyngraph2vecAERNN <cit.>: This method is similar to dyngraph2vecAE, except that the feed-forward neural network in the encoder is replaced by LSTM layers. This method also incorporates historical information while predicting graph embeddings. * EvolveGCN <cit.>: A graph convolution network (GCN) forms the core of this model. A recurrent neural network (RNN) is used to evolve the GCN parameters along time. * DynG2G <cit.>: This method uses the G2G encoder at its core to obtain the embeddings. The weights from the previous timestamp are used to initialize the weights of the next timestamp to make the training faster. Unlike the previous methods, DynG2G includes information about uncertainty of the embeddings. §.§ Comparisons of TransformerG2G and other baselines In Table <ref> we show the results for temporal link prediction on the six benchmark datasets, using embeddings obtained from our TransformerG2G model. We also visualize the MAP values using bar plots for temporal link prediction across six different benchmarks in Fig. <ref>. In Fig. 
<ref>, the blue bars show the MAP using the embeddings obtained with the DynG2G model, and the orange bars show the MAP using the embeddings obtained with our proposed TransformerG2G method. In the case of the SBM dataset, we see that varying the lookback changes the MAP only slightly. This behaviour is expected since SBM is a Markovian dataset, and therefore including more history would not improve the MAP and MRR. In the case of Reality Mining, we see an improvement in MAP up to l = 4. The MAP is better than that of DynG2G for all lookbacks. In the case of the UCI dataset, the mean MAP at l = 5 is larger than that for DynG2G. We also see a large variance in the MAP for UCI, possibly due to the rapid change in the number of nodes in this dataset over time. In the case of Slashdot, the MAPs obtained with the TransformerG2G model are lower than those for the DynG2G model for all the lookbacks. This is due to the very high novelty of this dataset, which indicates that Slashdot has highly evolutionary dynamics, making it difficult for the TransformerG2G model to accurately learn the dynamics. In the case of Bitcoin, the MAP values obtained with the TransformerG2G model are higher than those obtained with the DynG2G model for all lookbacks. The novelty of Bitcoin is similar to that of Slashdot; however, Bitcoin has 137 timestamps compared to 12 for Slashdot. The larger number of timestamps translates to more data, allowing the TransformerG2G model to learn the patterns in the data despite the high novelty. In the case of the AS dataset as well, we were able to attain a higher MAP using the TransformerG2G model. In Table <ref> we show the comparison on the temporal link prediction task against other embedding algorithms for the six benchmarks. We see that the TransformerG2G model outperforms the other methods for all datasets except the SBM and Slashdot datasets. The MAP and MRR values of TransformerG2G for the SBM dataset are very close to those of DynG2G and better than those of the other methods. We did not see an improvement in MAP here due to the Markovian nature of the dataset. In the case of Slashdot, TransformerG2G did not perform well because of the low number of timestamps and the high novelty of the dataset. Note that in the multi-step method, the importance given to different historical timestamps is a hyperparameter chosen by the user, whereas in TransformerG2G, the attention mechanism at its core adaptively assigns the importance to the different timestamps. §.§ Time-dependent attention matrix We visualize the attention matrices (i.e., softmax(QK^T/√(d))) of the trained TransformerG2G model to gain insights about which timestamps are prioritized by the transformer while predicting the embeddings. Figure <ref> a) shows the attention matrix for a randomly selected node (ID No. 6) from the Reality Mining dataset at different timestamps for the transformer model with l = 4. The [4-i,4-j]-th entry in the attention matrix corresponding to timestamp t denotes the temporal feature importance given to the representation of the selected node at timestamp t-j by the representation of the same node at timestamp t-i. To further comprehend the patterns in the attention matrices, we show the node degree of the current timestamp and its context in Fig. <ref>. We see that the model assigns more importance to timestamps where the node has higher connectivity (higher node degree). From timestamps 54 to 63, the selected node is absent from the graph since it has node degree 0, as evident from Fig. <ref> b).
We can see the reflection of this in the attention matrices from timestamps 54 to 63. The attention matrices do not assign significance to a particular historical timestamp and is confused which historical timestamp it should prioritize. § SUMMARY We presented the multi-step DynG2G and TransformerG2G deep learning models to learn embeddings in dynamic graphs. Our models leverage information about the temporal dynamics in the graph predicting embeddings, and also quantify the uncertainty associated with the node embeddings. In the multi-step method, we use a weighted scheme to initialize the weights of deep learning. The TransformerG2G model, based on the transformer architecture, uses the attention mechanism to adaptively give importance to the previous timestamps. We conducted experiments with varying lookbacks l, and found that the TransformerG2G model achieves better performance on link prediction task on six benchmark datasets. We also showed that the temporal evolution of the attention matrices in the TransformerG2G model follows the evolution of the node degree, and thereby pays attention to timestamps where the node has a higher connectivity. One of the limitations of this work is that we use the same embedding dimension during all timestamps. Using an adaptive embedding dimension that is optimal for each timestamp could improve the quality of the learned embeddings. This could further lead to better performance in the downstream link prediction tasks. In this work, we used the vanilla positional embedding and scaled dot-product attention within the transformer. It might be useful to explore other attention mechanisms and positional encoding methods. Future works could also look at the extension of the attention-based methods for learning embeddings in hyperbolic space. § ACKNOWLEDGEMENTS We would like to acknowledge support by the DOE SEA-CROGS project (DE-SC0023191) and the ONR Vannevar Bush Faculty Fellowship (N00014-22-1-2795). 10 url<#>1urlprefixURL href#1#2#2 #1#1 velivckovic2023everything P. Veličković, Everything is connected: Graph neural networks, Current Opinion in Structural Biology 79 (2023) 102538. salha2021fastgae G. Salha, R. Hennequin, J.-B. Remy, M. Moussallam, M. Vazirgiannis, Fastgae: Scalable graph autoencoders with stochastic subgraph decoding, Neural Networks 142 (2021) 1–19. yi2022graph H.-C. Yi, Z.-H. You, D.-S. Huang, C. K. Kwoh, Graph representation learning in bioinformatics: trends, methods and applications, Briefings in Bioinformatics 23 (1) (2022) bbab340. li2022graph M. M. Li, K. Huang, M. Zitnik, Graph representation learning in biomedicine and healthcare, Nature Biomedical Engineering (2022) 1–17. jumper2021highly J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, et al., Highly accurate protein structure prediction with alphafold, Nature 596 (7873) (2021) 583–589. fang2022geometry X. Fang, L. Liu, J. Lei, D. He, S. Zhang, J. Zhou, F. Wang, H. Wu, H. Wang, Geometry-enhanced molecular representation learning for property prediction, Nature Machine Intelligence 4 (2) (2022) 127–134. xu2020new M. Xu, Z. Wang, H. Zhang, D. Pantazis, H. Wang, Q. Li, A new graph Gaussian embedding method for analyzing the effects of cognitive training, PLoS Computational Biology 16 (9) (2020) e1008186. xu2021graph M. Xu, D. L. Sanz, P. Garces, F. Maestu, Q. Li, D. 
Pantazis, A graph Gaussian embedding method for predicting Alzheimer's disease progression with MEG brain networks, IEEE Transactions on Biomedical Engineering. chen2023traffic Y. Chen, K. Li, C. K. Yeo, K. Li, Traffic forecasting with graph spatial–temporal position recurrent network, Neural Networks 162 (2023) 340–349. guo2021learning S. Guo, Y. Lin, H. Wan, X. Li, G. Cong, Learning dynamics and heterogeneity of spatial-temporal graph data for traffic forecasting, IEEE Transactions on Knowledge and Data Engineering 34 (11) (2021) 5415–5428. li2023dynamic F. Li, J. Feng, H. Yan, G. Jin, F. Yang, F. Sun, D. Jin, Y. Li, Dynamic graph convolutional recurrent network for traffic prediction: Benchmark and solution, ACM Transactions on Knowledge Discovery from Data 17 (1) (2023) 1–21. iakovlev2020learning V. Iakovlev, M. Heinonen, H. Lähdesmäki, Learning continuous-time pdes from sparse data with graph neural networks, arXiv preprint arXiv:2006.08956. eliasof2021pde M. Eliasof, E. Haber, E. Treister, Pde-gcn: novel architectures for graph neural networks motivated by partial differential equations, Advances in neural information processing systems 34 (2021) 3836–3849. xu2020understanding M. Xu, Understanding graph embedding methods and their applications, SIAM Review 63 (4) (2021) 000–000. grover2016node2vec A. Grover, J. Leskovec, node2vec: Scalable feature learning for networks, in: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, 2016, pp. 855–864. bojchevski2018deep A. Bojchevski, S. Günnemann, Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking, in: International Conference on Learning Representations, 2018, pp. 1–13. ou2016asymmetric M. Ou, P. Cui, J. Pei, Z. Zhang, W. Zhu, Asymmetric transitivity preserving graph embedding, in: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, 2016, pp. 1105–1114. tang2015line J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, Q. Mei, Line: Large-scale information network embedding, in: Proceedings of the 24th international conference on world wide web, 2015, pp. 1067–1077. perozzi2014deepwalk B. Perozzi, R. Al-Rfou, S. Skiena, Deepwalk: Online learning of social representations, in: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 2014, pp. 701–710. wang2016structural D. Wang, P. Cui, W. Zhu, Structural deep network embedding, in: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, 2016, pp. 1225–1234. bojchevski2017deep A. Bojchevski, S. Günnemann, Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking, arXiv preprint arXiv:1707.03815. salha2022modularity G. Salha-Galvan, J. F. Lutzeyer, G. Dasoulas, R. Hennequin, M. Vazirgiannis, Modularity-aware graph autoencoders for joint community detection and link prediction, Neural Networks 153 (2022) 474–495. zhou2018dynamic L. Zhou, Y. Yang, X. Ren, F. Wu, Y. Zhuang, Dynamic network embedding by modeling triadic closure process, in: Proceedings of the AAAI conference on artificial intelligence, Vol. 32, 2018, pp. 571–578. zuo2018embedding Y. Zuo, G. Liu, H. Lin, J. Guo, X. Hu, J. Wu, Embedding temporal network via neighborhood formation, in: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 2018, pp. 2857–2866. trivedi2019dyrep R. Trivedi, M. Farajtabar, P. Biswal, H. 
Zha, Dyrep: Learning representations over dynamic graphs, in: International conference on learning representations, 2019, pp. 1–25. vaswani2017attention A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, Advances in neural information processing systems 30. kumar2019predicting S. Kumar, X. Zhang, J. Leskovec, Predicting dynamic embedding trajectory in temporal interaction networks, in: Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 2019, pp. 1269–1278. rossi2020temporal E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, M. Bronstein, Temporal graph networks for deep learning on dynamic graphs, arXiv preprint arXiv:2006.10637. xu2020inductive D. Xu, C. Ruan, E. Korpeoglu, S. Kumar, K. Achan, Inductive representation learning on temporal graphs, arXiv preprint arXiv:2002.07962. goyal2018dyngem P. Goyal, N. Kamra, X. He, Y. Liu, Dyngem: Deep embedding method for dynamic graphs, arXiv preprint arXiv:1805.11273. goyal2020dyngraph2vec P. Goyal, S. R. Chhetri, A. Canedo, dyngraph2vec: Capturing network dynamics using dynamic graph representation learning, Knowledge-Based Systems 187 (2020) 104816. xu2022dyng2g M. Xu, A. V. Singh, G. E. Karniadakis, Dyng2g: An efficient stochastic graph embedding method for temporal graphs, IEEE Transactions on Neural Networks and Learning Systems. qu2020continuous L. Qu, H. Zhu, Q. Duan, Y. Shi, Continuous-time link prediction via temporal dependent graph neural network, in: Proceedings of The Web Conference 2020, 2020, pp. 3026–3032. pareja2020evolvegcn A. Pareja, G. Domeniconi, J. Chen, T. Ma, T. Suzumura, H. Kanezashi, T. Kaler, T. Schardl, C. Leiserson, Evolvegcn: Evolving graph convolutional networks for dynamic graphs, in: Proceedings of the AAAI conference on artificial intelligence, Vol. 34, 2020, pp. 5363–5370. sankar2020dysat A. Sankar, Y. Wu, L. Gou, W. Zhang, H. Yang, Dysat: Deep neural representation learning on dynamic graphs via self-attention networks, in: Proceedings of the 13th international conference on web search and data mining, 2020, pp. 519–527. goyal2020graph P. Goyal, Graph embedding algorithms for attributed and temporal graphs, ACM SIGWEB Newsletter 2020 (Spring) (2020) 1–4. ji2021survey S. Ji, S. Pan, E. Cambria, P. Marttinen, S. Y. Philip, A survey on knowledge graphs: Representation, acquisition, and applications, IEEE transactions on neural networks and learning systems 33 (2) (2021) 494–514. xie2020survey Y. Xie, C. Li, B. Yu, C. Zhang, Z. Tang, A survey on dynamic network embedding, arXiv preprint arXiv:2006.08093. xue2022dynamic G. Xue, M. Zhong, J. Li, J. Chen, C. Zhai, R. Kong, Dynamic network embedding survey, Neurocomputing 472 (2022) 212–223. poursafaei2022towards F. Poursafaei, S. Huang, K. Pelrine, R. Rabbany, Towards better evaluation for dynamic link prediction, arXiv preprint arXiv:2207.10128. chen2015net2net T. Chen, I. Goodfellow, J. Shlens, Net2net: Accelerating learning via knowledge transfer, arXiv preprint arXiv:1511.05641. kingma2014adam D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
http://arxiv.org/abs/2307.00320v1
20230701121252
Automatic Solver Generator for Systems of Laurent Polynomial Equations
[ "Evgeniy Martyushev", "Snehal Bhayani", "Tomas Pajdla" ]
cs.CV
[ "cs.CV" ]
In computer vision applications, the following problem often arises: Given a family of (Laurent) polynomial systems with the same monomial structure but varying coefficients, find a solver that computes solutions for any family member as fast as possible. Under appropriate genericity assumptions, the dimension and degree of the respective polynomial ideal remain unchanged for each particular system in the same family. The state-of-the-art approach to solving such problems is based on elimination templates, which are the coefficient (Macaulay) matrices that encode the transformation from the initial polynomials to the polynomials needed to construct the action matrix. Knowing an action matrix, the solutions of the system are computed from its eigenvectors. The important property of an elimination template is that it applies to all polynomial systems in the family. In this paper, we propose a new practical algorithm that checks whether a given set of Laurent polynomials is sufficient to construct an elimination template. Based on this algorithm, we propose an automatic solver generator for systems of Laurent polynomial equations. The new generator is simple and fast; it applies to ideals with positive-dimensional components; it allows one to uncover partial p-fold symmetries automatically. We test our generator on various minimal problems, mostly in geometric computer vision. The speed of the generated solvers exceeds the state-of-the-art in most cases. In particular, we propose the solvers for the following problems: optimal 3-view triangulation, semi-generalized hybrid pose estimation and minimal time-of-arrival self-calibration. The experiments on synthetic scenes show that our solvers are numerically accurate and either comparable to or significantly faster than the state-of-the-art solvers. Laurent polynomial, elimination template, generalized eigenvalue problem, minimal problem. Automatic Solver Generator for Systems of Laurent Polynomial Equations Evgeniy Martyushev, Snehal Bhayani, and Tomas Pajdla E. Martyushev is with the Department of Mathematical Analysis and Mathematics Education, Institute of Natural Sciences and Mathematics, South Ural State University, Chelyabinsk, Lenin avenue 76, Russia. E-mail: [email protected] S. Bhayani is with the Center for Machine Vision and Signal Analysis, University of Oulu, Finland E-mail: [email protected] T. Pajdla is with the Czech Institute of Informatics, Robotics, and Cybernetics, Czech Technical University in Prague, Prague 6, Czech Republic E-mail: [email protected] ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § INTRODUCTION § INTRODUCTION Many problems of applied science can be reduced to finding common roots of a system of multivariate (Laurent) polynomial equations. Such problems arise in chemistry, mathematical biology, theory of ODE's, geodesy, robotics, kinematics, acoustics, geometric computer vision, and many other areas. 
For some problems, it is only required to find all (or some) roots of a particular polynomial system, and the root-finding time does not matter much. In contrast, other problems require finding roots for a family of polynomial systems with the same monomial structure, but different coefficient values. For a given set of coefficients, the roots must be found quickly and with acceptable accuracy. Under appropriate genericity assumptions on the coefficients, the dimension, and degree of the corresponding polynomial ideal remain unchanged. The state-of-the-art approach to solving such problems is to use symbolic-numeric solvers based on elimination templates <cit.>. These solvers have two main parts. In the first offline part, an elimination template is constructed. The template consists of a map (formulas) from input data to a (Macaulay) coefficient matrix. The structure of the template is the same for each set of coefficients. In the second online phase, the coefficient matrix is filled with the data of a particular problem, reduced by the Gauss–Jordan (G–J) elimination, and used to construct an eigenvalue/eigenvector computation problem of an action matrix that delivers the solutions of the system. While the offline phase is not time critical, the online phase has to be computed very fast (usually in sub-milliseconds) to be useful for robust optimization based on the RANSAC schemes <cit.>. The speed of the online phase is mainly determined by two operations, namely the G–J elimination of the template matrix and the eigenvalue/eigenvector computation of the action matrix. Therefore, one approach to generating fast solvers, is to find elimination templates that are as small as possible. The size of the elimination templates affects not only the speed of the resulting solvers, but also their numerical stability. The latter is more subtle, but experiments show that the larger templates have worse stability without special stability enhancing techniques, see e.g. <cit.>. §.§ Contribution We propose a new automatic generator of elimination templates for efficiently solving systems of Laurent polynomial equations. The advantages of our generator are as follows. * Flexibility: It finds elimination templates for a possibly redundant number of roots. In some cases, it can significantly reduce the template size and thus speed up the root computation. * Versatility: (i) It is applicable to polynomial ideals with positive-dimensional components; (ii) It is also applicable to uncovering the partial p-fold symmetries to generate smaller templates. * Simplicity: By and large, it uses only manipulations with sets of monomials and G–J elimination on matrices over a finite field. We demonstrate our method on a variety of minimal problems mostly in geometric computer vision. For many of them, we have constructed solvers that are faster than the state-of-the-art. We propose a solver for the famous problem of optimal 3-view triangulation <cit.>, which is naturally formulated as a system of Laurent polynomial equations. Our solver for this problem is numerically accurate and slightly faster than the state-of-the-art solvers from <cit.>. We also propose a fast solver for the semi-generalized hybrid pose estimation problem <cit.>. 
Defined as the problem of estimating the relative pose of a pinhole camera with unknown focal length a calibrated generalized camera, from a hybrid set of one 2D-2D and three 2D-3D point correspondences, its original formulation in <cit.> used a homography-based formulation along with the elimination ideal method <cit.>. However, this led to large expressions for the polynomial coefficients, resulting in slow solvers. In comparison, our solver relies on a depth-based formulation that results in a Laurent polynomial system. The coefficients of this system are much simpler expressions. Therefore, the solver generated using our proposed AG is 20-30 times faster than the solvers based on the homography formulation. Finally, we propose solvers for the 4s/6r and 5s/5r Time-of-Arrival minimal problems <cit.>. Our solvers have comparable numerical accuracy and are 1.3-1.8 times faster than the state-of-the-art solvers from <cit.>. §.§ Related work Elimination templates are matrices that encode the transformation from polynomials of the initial system to polynomials needed to construct the action matrix. Knowing an action matrix, the solutions of the system are computed from its eigenvectors. Automatic generator (AG) is an algorithm that takes a polynomial system as input and outputs an elimination template for the action matrix computation. Automatic generators: The first automatic generator was built in <cit.>, where the template was constructed iteratively by expanding the initial polynomials with their multiples of increasing degree. This AG has been widely used by the computer vision community to construct polynomial solvers for a variety of minimal problems,  <cit.>, see also <cit.>. Paper <cit.> introduced a non-iterative AG based on tracing the Gröbner basis construction and subsequent syzygy-based reduction. This AG allowed fast template construction even for hard problems. An alternative AG based on the use of sparse resultants was proposed in <cit.>. This method, together with <cit.>, are currently the state-of-the-art automatic template generators. Improving stability: The standard way to construct the action matrix from a template requires performing its LU decomposition. For large templates, this operation often leads to significant round-off and truncation errors, and hence to numerical instability. The series of papers <cit.> addressed this problem and proposed several methods of improving stability, e.g. by performing a QR decomposition with column pivoting on the step of constructing the action matrix from a template. Optimizing formulations: Choosing an appropriate formulation of a minimal problem can drastically simplify finding its solutions. The paper <cit.> proposed the variable elimination strategy, which reduces the number of unknowns in the initial polynomial system. For some problems, this strategy led to significantly smaller templates <cit.>. Optimizing templates: Much effort has been spent on speeding up the action matrix method by optimizing the template construction step. The paper <cit.> introduced a method to optimize templates by removing some unnecessary rows and columns. The method in <cit.> exploited the sparsity of elimination templates by converting a large sparse template into the so-called singly-bordered block-diagonal form. This allowed splitting the initial problem into several smaller subproblems that are easier to solve. In paper <cit.>, the authors proposed two methods that significantly reduced the size of elimination templates. 
The first method used the so-called Gröbner fan of a polynomial ideal to construct templates all possible standard bases of the quotient space. The second method went beyond Gröbner bases and introduced a random sampling strategy to construct non-standard bases. In <cit.>, the authors proposed a heuristic greedy optimization strategy to reduce the templates obtained by the non-iterative AG from <cit.>. Optimizing root solving: Complex roots are spurious for most problems arising in applications. The paper <cit.> introduced two methods to avoid the computation of complex roots, resulting in a significant speedup of polynomial solvers. Discovering symmetries: Polynomial systems for certain minimal problems may have hidden symmetries. Uncovering these symmetries is another way to optimize templates. This approach was demonstrated for the simplest partial p-fold symmetries in <cit.>. A more general case was studied in <cit.>. Laurent polynomial ideals: Some application problems can be naturally formulated as a system of Laurent polynomial equations, and only the toric roots of the system are of interest. Clearly, any Laurent polynomial equation can be transformed either into an ordinary polynomial equation by taking its numerator, or into a system of ordinary polynomial equations by introducing new variables. It follows that any AG for ordinary polynomials can be also applied to Laurent polynomials. However, such an approach can have unwanted consequences: increasing the number of variables, increasing the total degree of polynomials, introducing false (non-toric) roots. All this can complicate the root-finding process. Working directly in the Laurent polynomial ring is preferable as it provides more “degrees of freedom” in choosing action polynomial and constructing shifts of the initial polynomials. The Gröbner and the border bases for Laurent polynomial ideals were introduced in <cit.> and <cit.> respectively. An eigenvalue method for solving square systems of Laurent polynomial equations has been proposed in <cit.>. For Laurent systems with more polynomials than the number of variables, non-square systems, a sparse resultant-based method has been proposed in <cit.> which uses Newton polytopes <cit.> to generate the elimination template as a resultant matrix. The most related work: Our work is essentially based on the results of papers <cit.>. § SOLVING SETS OF LAURENT MONOMIALS We use 𝕂 for a field, X = {x_1, …, x_k} for a set of k variables, R = 𝕂[X, X^-1] for the 𝕂-algebra of Laurent polynomials over 𝕂. Let F = {f_1, …, f_s}⊂ R∖𝕂 and J = ⟨ F⟩ be the ideal generated by F. Let 𝒱 = {p ∈ (𝕂∖{0})^k f_1(p) = … = f_s(p) = 0} be the set of common roots of F. We assume that 𝒱 is 0-dimensional, it is a finite set of points. More generally, 𝒱 is reducible and one of its components is 0-dimensional, 𝒱 = 𝒱∪𝒱_0 with 𝒱_0 = 0. The positive-dimensional variety 𝒱 consists of superfluous unfeasible roots. This case was addressed in <cit.> for polynomial systems. In the sequel, we assume that 𝒱 = 0. It is clear that there exists (α_1^j, …, α_k^j) ∈ℤ^k_≥ 0 such that f_j = x_1^α_1^j… x_k^α_k^jf_j ∈𝕂[X] for each j = 1, …, s. Thus, 𝒱 can be also obtained as a set of common roots of the polynomial system F = 0, where F = {f_1, …, f_s}. However, the use of F instead of F may result in the appearance of superfluous roots that do not belong to the torus (𝕂∖{0})^k. Saturating these roots is an additional non-trivial problem in general. 
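The conversion from F to F̃ can be carried out mechanically with a computer algebra system; the short SymPy sketch below (our own illustration, using a toy Laurent polynomial of the same shape as the first polynomial in the worked example further below) simply extracts the numerator after putting the expression over a common denominator:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A toy Laurent polynomial (illustrative only).
f = 2*y**2/x - 7*x - 4*y + 9

# Clearing the denominator: x * f is an ordinary polynomial whose toric roots
# coincide with those of f, but which may gain extra roots outside the torus.
f_tilde = sp.expand(sp.fraction(sp.together(f))[0])
print(f_tilde)                                   # 2*y**2 - 7*x**2 - 4*x*y + 9*x
print(sp.Poly(f_tilde, x, y).total_degree())     # total degree grows to 2
```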
Furthermore, the total degrees of the polynomials in F can increase significantly, which can lead to larger elimination templates. In contrast, our examples show that working directly with the Laurent polynomials leads to smaller elimination templates and thus to faster solvers, Problems #35 and #36 in Tab. <ref> below. We start by generalizing the definition of solving bases (in this paper we will use the term "solving sets") from <cit.> for Laurent polynomials. For simplicity, we restrict ourselves to the solving sets consisting of monomials. Let U = {x_1^α_1… x_k^α_k (α_1, …, α_k) ∈ℤ^k} be the set of Laurent monomials in X. We denote by 𝒜 the vector consisting of the elements of a finite set of Laurent monomials 𝒜⊂ U which are ordered according to a certain total ordering on U, the graded reverse lex ordering (grevlex) with x_1 > … > x_k which compares monomials first by their total degree, α_1 + … + α_k, and breaks ties by smallest degree in x_k, x_k - 1, etc. Note that grevlex is not a well-ordering on U, but this is of no importance for our purposes. Let ℬ⊂ U and a ∈ R∖𝕂. Let us define the vector C := a T_1 ℬ - T_0 ℬ∈ R^d, where d = #ℬ, T_0, T_1 ∈𝕂^d × d, and T_1 ≠ 0. The set of monomials ℬ is called the solving set for the ideal J if the following condition holds: (C1) C ⊂ J, each element of C is a Laurent polynomial from J. In this case the polynomial a is called the action polynomial. If ℬ is a solving set for J, then C(p) = 0 for any p ∈𝒱 and hence we come to the generalized eigenproblem <cit.> T_0 ℬ(p) = a(p) T_1 ℬ(p). It follows that a(p) ∈σ(T_0,T_1) = {λ∈𝕂(T_0 - λ T_1) = 0}. In this paper we restrict ourselves to the case T_1 ≠ 0, which guarantees that the set σ(T_0,T_1) is finite <cit.>. Since the matrix T_1 is invertible, the problem (<ref>) can be solved as the regular eigenproblem for the action matrix T_1^-1T_0. The drawback of such an approach is that an ill-conditioned matrix T_1 can cause significant inaccuracies in the computed eigenvalues. On the other hand, there is a numerically backward stable QZ algorithm <cit.> for solving the problem (<ref>). For each p ∈𝒱 there exists λ∈σ(T_0,T_1) such that a(p) = λ. If the related eigenspace (T_0 - λ T_1) is 1-dimensional and u is its basis vector, then u = ℬ(p) up to scale. Note that the vector C may vanish at a point p ∉𝒱. Therefore the set {a(p) p ∈𝒱} may be a proper subset of σ(T_0,T_1), it may happen that d > #𝒱. In this case, the solving set is said to be redundant <cit.>. It may also happen that d = #𝒱 or d < #𝒱. The latter case applies e.g. to systems with the partial p-fold symmetries <cit.>. Next, given a solving set ℬ let us introduce the following additional condition: (C2) for each variable x_i ∈ X there is an element b_i ∈ℬ such that x_i· b_i ∈ℬ. Condition (C2) guarantees that the root p can be directly computed from the eigenvector u. If x_i· b_i = b' and the elements b_i and b' are at the rth and qth positions of vector ℬ respectively, then x_i(p) = u^q/u^r, where u^q and u^r are the qth and rth entries of vector u respectively. On the other hand, if ℬ does not satisfy condition (C2), then additional computations may be required to derive roots. To summarize, knowing the solving set ℬ, which additionally satisfies condition (C2), together with the Laurent polynomials from J = ⟨ F⟩, which have the form (<ref>), allows one to compute the roots of the system F = 0. The main question is, how to find the solving sets? 
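Before turning to that question, the root-recovery step described above can be sketched in a few lines (SciPy-style pseudocode; here T0, T1 and the index bookkeeping for condition (C2) are assumed to be already available from an elimination template):

```python
import numpy as np
from scipy.linalg import eig

def recover_variable(T0, T1, q, r, tol=1e-8):
    """Solve the generalized eigenproblem T0 v = lambda * T1 v (QZ under the hood)
    and read off one coordinate as the ratio u^q / u^r of eigenvector entries,
    where the monomials at positions q and r differ by the variable x_i (condition C2)."""
    eigvals, eigvecs = eig(T0, T1)          # scipy uses the QZ algorithm for the pair (T0, T1)
    values = []
    for k, lam in enumerate(eigvals):
        if not np.isfinite(lam):            # defensive check for degenerate pencils
            continue
        u = eigvecs[:, k]
        if abs(u[r]) > tol:
            values.append(u[q] / u[r])      # x_i(p) = u^q / u^r, up to the common scale of u
    return values
```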
For this purpose we propose to use elimination templates and the incremental approach similar to that from <cit.>. § MACAULAY MATRICES AND ELIMINATION TEMPLATES Given a Laurent polynomial f, we denote by U_f the support of f, U_f = {m ∈ U c(f, m) ≠ 0}, where c(f, m) is the coefficient of f at monomial m. Given a set of Laurent polynomials F = {f_1, …, f_s}, we denote by U_F the support of F, U_F = ⋃_i = 1^s U_f_i. Let n = # U_F be the cardinality of the finite set U_F. The Macaulay matrix M(F) ∈𝕂^s× n is defined as follows: its (i, j)th element is the coefficient c(f_i, m_j) of the polynomial f_i ∈F at the monomial m_j ∈ U_F, M(F)_ij = c(f_i, m_j). Thus, M(F) U_F = 0 is the vector form of the Laurent polynomial system F = 0. A shift of a polynomial f is a multiple of f by a monomial m ∈ U. Let A = (A_1, …, A_s) be an ordered s-tuple of finite sets of monomials A_j ⊂ U for all j. We define the set of shifts of F as A· F = {m · f_j m ∈ A_j, f_j ∈ F}. Let a be a Laurent polynomial and ℬ be a finite subset of Laurent monomials from U_A· F such that U_a m⊂ U_A· F for each m ∈ℬ. We define the two subsets ℛ = ⋃_b ∈ U_a{b m m ∈ℬ}∖ℬ, ℰ = U_A· F∖ (ℛ∪ℬ). Clearly, the subsets ℬ, ℛ, ℰ are pairwise disjoint and U_A· F = ℰ∪ℛ∪ℬ. A Macaulay matrix M(A· F) with columns arranged in ordered blocks M(A · F) = [ M_ℰ M_ℛ M_ℬ ] is called the elimination template for F a if the reduced row echelon form of M(A· F) is M(A· F) = ℰ ℛ ℬ * 0 * 0 I M_ℬ 0 0 0, where * means a submatrix with arbitrary entries, 0 is the zero matrix of a suitable size, I is the identity matrix of order #ℛ and M_ℬ is a matrix of size #ℛ×#ℬ. It follows from the definition that if a Macaulay matrix M(A· F) is an elimination template, then the set ℬ is the solving set for J = ⟨ F⟩. On the other hand, the action polynomial a, the s-tuple of sets A and the solving set ℬ uniquely determine the elimination template M(A· F) (up to reordering its rows and columns in M_ℰ, M_ℛ, M_ℬ). § AUTOMATIC SOLVER GENERATOR Our automatic solver generator consists of two main steps: (i) finding an elimination template for a set of Laurent polynomials (); (ii) reducing the template by removing all its unnecessary rows and columns (). Both steps are essentially based on the procedure that checks whether a given set of polynomials is sufficient to construct an elimination template for a given action polynomial (). To speed up the computation, both steps are performed over a finite field of sufficiently large order. We assume that there exists a generic instance of the problem with coefficients in this field. §.§ Elimination template test For the sake of brevity, we denote the support U_F of a finite set of Laurent polynomials F by 𝒰. Given a Laurent polynomial a, we define the set of permissible monomials <cit.> as 𝒫 = ⋂_b ∈ U_a{m ∈𝒰 b m ∈𝒰}, the set of reducible monomials as ℛ = ⋃_b ∈ U_a{b m m ∈𝒫}∖𝒫, and the set of excessive monomials ℰ consisting of monomials from 𝒰 which are neither in ℛ nor in 𝒫, ℰ = 𝒰∖ (ℛ∪𝒫). First we set 𝒰_0 = 𝒰 and ℰ_0 = ∅. We open the loop over the index i starting with i = 1. At the ith iteration we set 𝒰_i = 𝒰_i-1∖ℰ_i-1, ℬ_i = ⋂_b ∈ U_a{m ∈𝒰_i b m ∈𝒰_i}. If ℬ_i = ∅, then the algorithm terminates with the empty set. Otherwise, we proceed ℛ_i = ⋃_b ∈ U_a{b m m ∈ℬ_i}∖ℬ_i, ℰ_i = ℰ_i-1∪𝒰_i ∖ (ℛ_i ∪ℬ_i). Let M be a Macaulay matrix corresponding to F and V be the related monomial vector. We reorder the columns of matrix M and the entries of vector V according to the partition ℰ_i ∪ℛ_i ∪ℬ_i. 
The resulting Macaulay matrix and the resulting monomial vector, denoted by M_i and V_i respectively, obey the relation M_iV_i = MV. Next, let M_i be the reduced row echelon form of M_i and F_i = {M_i V_i} be the corresponding set of Laurent polynomials. We define the following subset of ℛ_i: ℛ_i = {m ∈ℛ_i m - ∑_j γ_j b_j ∈F_i, γ_j ∈𝕂, b_j ∈ℬ_i}. If ℛ_i = ℛ_i, then we set l = i and terminate the loop over i. Otherwise, we set ℰ_i = ℰ_i ∪ (ℛ_i ∖ℛ_i) and proceed with i+1. The algorithm generates the following sequence of proper subsets 𝒫 = ℬ_1 ⊃ℬ_2 ⊃…⊃ℬ_l-1⊃ℬ_l = ℬ. It follows that the algorithm always terminates in a finite number of steps. By the construction, the resulting subset ℬ is either the empty set or the set satisfying condition (C1). We additionally check if ℬ satisfies condition (C2). If so, the algorithm returns the solving set ℬ. The respective Macaulay matrix M_l is the elimination template. Otherwise, the algorithm returns the empty set. The template test function is summarized in Alg. <ref>. This example demonstrates the work of the template test function from Alg. <ref> on the following set of two Laurent polynomials from ℚ[x^± 1, y^± 1]: F = {f_1, f_2} = {2y^2/x - 7x - 4y + 9, 2x^2/y - 7y - 4x + 9}. The system F = 0 has the following three roots in (ℚ∖{0})^2: (1,1), (-1,2), (2,-1). First, let us show that the 2× 5 Macaulay matrix for the initial system is an elimination template for F the action monomial a = xy. At the first iteration (i = 1) we have [ 𝒰_1 = {x^2y, x, y, y^2x, 1},; ℰ_1 = {1}, ℛ_1 = {x^2y}, ℬ_1 = {x, y, y^2x}. ] The Macaulay matrix of the initial system whose columns are arranged ℰ_1 ∪ℛ_1 ∪ℬ_1 is given by M_1 = 1 x^2y x y y^2x f_1 9 0 -7 -4 2 f_2 9 2 -4 -7 0. The reduced row echelon form of M_1 has the form M_1 = 1 x^2y x y y^2x 1 0 -79 -49 29 0 1 32 -32 -1. The second row implies x^2/y + 3/2 x - 3/2 y - y^2/x = 0, ℛ_1 = ℛ_1. It follows that the matrix M_1 is the elimination template for F a. The set ℬ_1 does satisfy condition (C1) but does not satisfy condition (C2): there is no element b ∈ℬ_1 such that x· b ∈ℬ_1 or y· b ∈ℬ_1. Therefore, none of the two coordinates of a solution can be read off from the eigenvectors of the related action matrix. The algorithm returns the empty set. Now let us consider the set of shifts A· F = {f_2/x, f_2, f_1} and the same action monomial a = xy. At the first iteration (i = 1) we have [ 𝒰_1 = {x^2y, x, y, y^2x, xy, 1, yx, 1x},; ℰ_1 = {1x}, ℛ_1 = {x^2y, xy}, ℬ_1 = {x, y, y^2x, 1, yx}. ] The Macaulay matrix of the expanded system whose columns are arranged ℰ_1 ∪ℛ_1 ∪ℬ_1 is given by M_1 = 1x x^2y xy x y y^2x 1 yx f_2/x 9 0 2 0 0 0 -4 -7 f_2 0 2 0 -4 -7 0 9 0 f_1 0 0 0 -7 -4 2 9 0. The reduced row echelon form of M_1 has the form M_1 = 1x x^2y xy x y y^2x 1 yx 1 0 29 0 0 0 -49 -79 0 1 0 0 -3314 -47 2714 0 0 0 0 1 47 -27 -97 0. The last two rows imply that ℛ_1 = {x^2y}≠ℛ_1 and hence we proceed by setting ℰ_1 = ℰ_1 ∪ (ℛ_1∖ℛ_1) = {xy, 1x}. At the second iteration (i = 2) we have [ 𝒰_2 = 𝒰_1 ∖ℰ_1 = {x^2y, x, y, y^2x, 1, yx},; ℰ_2 = {xy, 1x}, ℛ_2 = {x^2y, 1}, ℬ_2 = {x, y, y^2x, yx}. ] The rearranged Macaulay matrix is given by M_2 = xy 1x x^2y 1 x y y^2x yx f_2/x 2 9 0 -4 0 0 0 -7 f_2 0 0 2 9 -4 -7 0 0 f_1 0 0 0 9 -7 -4 2 0. The reduced row echelon form of M_2 has the form M_2 = xy 1x x^2y 1 x y y^2x yx 1 92 0 0 -149 -89 49 -72 0 0 1 0 32 -32 -1 0 0 0 0 1 -79 -49 29 0. The last two rows imply ℛ_2 = ℛ_2 and hence M_2 is the elimination template for F a = xy. Now the solving set ℬ_2 does satisfy condition (C2) as x·yx∈ℬ_2, y·yx∈ℬ_2. 
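As a quick sanity check on this example (a short SymPy sketch of ours, with the polynomials and roots copied from the text), one can verify that the three toric roots satisfy F = 0 and inspect the values taken by the action monomial a = xy on them:

```python
import sympy as sp

x, y = sp.symbols('x y')
f1 = 2*y**2/x - 7*x - 4*y + 9
f2 = 2*x**2/y - 7*y - 4*x + 9

roots = [(1, 1), (-1, 2), (2, -1)]
for px, py in roots:
    residuals = [sp.simplify(f.subs({x: px, y: py})) for f in (f1, f2)]
    print((px, py), residuals, "a = xy ->", px * py)
# Every residual pair is (0, 0); a = xy evaluates to 1, -2 and -2 on the three roots,
# so these values appear among the eigenvalues of the related generalized eigenproblem.
```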
Finally we note that the first two columns of matrix M_2, corresponding to the excessive monomials, are linearly dependent. Removing one of these columns results in the reduced elimination template of size 3× 7. The related action matrix is of order 4, the solver has one redundant root. §.§ Finding template Based on the template test function described in the previous subsection, we propose the algorithm for finding an elimination template for a given set F of s Laurent polynomials. First we define the trivial s-tuple A^0 = ({1}, …, {1}) such that A^0· F = F. We open the loop over the index i starting with i = 1. At the ith iteration we expand the s-tuple A^i-1 = (A^i-1_1, …, A^i-1_s) as follows A^i_j = A^i-1_j ∪{x^± 1· m x ∈ X, m ∈ A^i-1_j} ∀ j. Then we construct the set of shifts A^i· F. For each monomial a ∈ X^-1∪ X, where X^-1 = {x_1^-1, …, x_k^-1}, we evaluate ℬ_i = (A^i· F, a). If ℬ_i ≠∅, then ℬ_i is the solving set and the algorithm terminates with the data a, A_i, ℬ_i required to construct the elimination template. Otherwise, we proceed with the (i+1)th iteration. To ensure that the algorithm terminates in a finite number of steps, we limited iterations to a natural number N. In our experiments we found that for all (tractable) systems it is sufficient to set N = 10. The template finding function is summarized in Alg. <ref>. §.§ Reducing template In general, the template returned by Alg. <ref> may be very large. In this subsection we propose a quite straightforward algorithm for its reduction. Given the s-tuple of sets A = (A_1, …, A_s) and the solving set ℬ, we set A' = A and ℬ' = ℬ. For each j = 1, …, s and m_r ∈ A_j we define the intermediate s-tuple A” = (A'_1, … A'_j-1, A'_j ∖ m_r, A'_j+1, …, A'_s). Then we evaluate ℬ” = (A”· F, a). It may happen that ℬ”≠ℬ. The cardinality of the solving set is allowed to decrease and is not allowed to increase while reduction. Therefore, we set A' = A”, ℬ' = ℬ” if and only if ℬ”≠∅ and #ℬ”≤#ℬ. Then we proceed with the next monomial m_r+1. If r+1 > # A_j, then we proceed with j+1. The template reduction function is summarized in Alg. <ref>. The templates are also reduced by removing all linearly dependent columns corresponding to the excessive monomials as it is described in <cit.>. As a result, our templates always satisfy the “necessary condition of optimality”: # of columns - # of rows = # of roots. Finally, for problems that contain sparse polynomials with all constant coefficients, we applied the Schur complement reduction <cit.>. § EXPERIMENTS In this section, we test our solver generator on 36 minimal problems from geometric computer vision and acoustics. We compare our AG with one of the state-of-the-art AGs from <cit.> (Greedy). The results are presented in Tab. <ref>, and we make the following remarks about them. 1. The experiments were performed on a system with Intel(R) Core(TM) i5-1155G7 @ 2.5 GHz and 8 GB of RAM. 2. In general, the size of a template alone is not an appropriate measure of the efficiency of the corresponding solver. For example, the 5-point absolute pose estimation problem for a known refractive plane (Problem #9) has the templates of sizes 38× 58 and 57× 73. The first template is smaller but is followed by the eigendecomposition of a 20× 20 matrix. On the other hand, the second template is larger but requires the eigendecomposition of a smaller matrix of size 16× 16. At first glance, it is unclear which of these two templates would provide a faster solver. 
Therefore, to compare the efficiency of the solvers, we reported the template size, the number of roots and the average runtime of the corresponding Matlab <cit.> implementation. The reported times include the derivation of the action matrix and its eigendecomposition and do not include the construction of the coefficient matrix. 3. The numerical error is defined as follows. Let the Laurent polynomial system F = 0 be written in the form M(F) Z = 0, where M(F) and Z = U_F are the Macaulay matrix and monomial vector respectively. The matrix M(F) is normalized so that each its row has unit length. Let d_0 be the number of roots to F = 0 and d ≥ d_0 be the number of roots returned by our solver, there are d - d_0 false roots. Let Z_i be the monomial vector Z evaluated at the ith (possibly false) root. We compute d values ϵ_i = M(F)Z_i/Z_i_2_2, where ·_2 is the Frobenius norm. Then the numerical error for our solvers is measured by the value 1/2log_10∑_i ϵ_i^2, where the sum is taken over d_0 smallest values of ϵ_i. 4. The hard minimal problem of relative pose estimation from 9 lines in 3 uncalibrated images (Problem #23) was first addressed in <cit.> where, using the homotopy continuation method, it was shown that the problem has 36 solutions. In <cit.>, the authors proposed an efficient formulation of the problem consisting of 21 polynomials in 14 variables and first attempted to propose an eigenvalue solver for this problem by constructing a giant elimination template of size 16,278× 13,735. We started with exactly the same formulation as in <cit.>. By applying the G–J elimination on the initial coefficient matrix, we excluded 4 variables resulting in the formulation consisting of 17 polynomials in 10 variables. Our generator found the template of size 2,163× 2,616 with 116 roots in approximately 20 minutes. Then it was reduced to the reported size in approximately 13 hours. 5. Problems #13, #14, #16, #17, #18, #30 contain sparse polynomials with all (or all but one) constant coefficients. We additionally reduced the templates for these problems by the Schur complement reduction, see <cit.> for details. 6. The 2-fold symmetries in the formulations of Problems #25 and #26 were uncovered manually by changing variables. On the other hand, our generator automatically uncovered the partial symmetries for Problems #27 and #28 by constructing the solving set of cardinality less than the degree of the related ideal. 7. The AG from <cit.> applies only to zero-dimensional ideals. Therefore, to apply it to Problems #29–#36, we saturated the positive-dimensional components in their formulations either by the Rabinowitsch trick <cit.>, or by a cascade of G–J eliminations as in <cit.>. The remaining problems were compared using the same formulations. 8. The Maple <cit.> implementation of the new AG, as well as the Matlab <cit.> solvers for all the minimal problems from Tab. <ref>, are made publicly available at https://github.com/martyushev/EliminationTemplateshttps://github.com/martyushev/EliminationTemplates. §.§ Optimal 3-view triangulation The optimal 3-view triangulation problem, first addressed in <cit.>, is formulated as follows. Given three projective camera matrices P_1, P_2, P_3 and image point correspondences x_1 ↔ x_2 ↔ x_3, find the space point X^* so that the reprojection error is minimized. That is X^* = min_X ∑_i = 1^3 (P^1_i X/P^3_i X - x^1_i)^2 + (P^2_i X/P^3_i X - x^2_i)^2, where X = [ x y z 1 ]^⊤, P^j_i is the jth row of P_i and x^j_i is the jth entry of x_i. 
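For concreteness, the cost being minimized can be written as the following small NumPy function (our own illustrative code; P is the list of 3×4 camera matrices and x the list of observed image points, as in the formulation above):

```python
import numpy as np

def reprojection_cost(X, P, x):
    """Sum of squared reprojection errors over the three views.
    X : homogeneous space point [x, y, z, 1]
    P : list of three 3x4 camera matrices
    x : list of three observed image points (first two entries are used)"""
    cost = 0.0
    for Pi, xi in zip(P, x):
        proj = Pi @ X
        cost += (proj[0] / proj[2] - xi[0])**2 + (proj[1] / proj[2] - xi[1])**2
    return cost
```

Setting the partial derivatives of this cost with respect to x, y, z to zero yields the Laurent polynomial system discussed next.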
By choosing an appropriate projective coordinate frame, we can assume that P^3_1 = [ 1 0 0 0 ], P^3_2 = [ 0 1 0 0 ], P^3_3 = [ 0 0 0 1 ], the image plane of the third camera is the plane at infinity. Such parametrization, proposed in <cit.>, leads to smaller templates compared to P^3_3 = [ 0 0 1 0 ] proposed in <cit.>. The optimal solution is one of the 47 stationary points which are found as roots of a system of three Laurent polynomial equations in three variables x, y, z. Unlike previous work, our generator is able to work directly with the Laurent polynomial formulation. The problem has been extensively studied <cit.>. The solvers from <cit.> are currently the state-of-the-art. In Fig. <ref>, we show the support U_F of the initial system as well as the solving set ℬ with #ℬ = 58. The related elimination template is of size 69× 127, Tab. <ref>, Problem #35. We tested the new solver on synthetic scenes. We modeled a 3D point X lying in a cube with edge of length 1 centered at the coordinate origin. The point is viewed by three cameras. The centers c_i (here and below i = 1,2,3) of the cameras randomly lie on a sphere of radius 1 also centered at the origin. The three rotation matrices R_i are chosen randomly and the calibration matrices K_i all have the focal length and the principal point approximately 1,000 and (500, 500) respectively. The initial data for our solver are the three camera matrices P_i = K_i [ R_i -R_ic_i ] and the projections x_i = P_i X normalized so that x_i^3 = 1. We tested the numerical accuracy of our solver by constructing the distribution of the errors in 3D placement on noise-free image data. We kept the real roots, including false ones, and then picked out the unique root by calculating the reprojection errors. The 3D placement error distributions for 10K trials are compared in Fig. <ref>. The speed and the failure rate of the solvers are compared in Tab. <ref>. §.§ Semi-generalized hybrid relative pose: Lg Consider the problem of registering a partially calibrated pinhole camera P (with unknown focal length f) a generalized camera G from a hybrid set of point correspondences, one 2D-2D correspondence p_1 ↔ (q_11, t_g_1) and three 2D-3D correspondences p_j ↔ X_j, j=1, …, 3. The generalized camera G is considered as a set of multiple pinhole cameras, {G_i}, which have been registered a global coordinate frame. The goal is to estimate the relative pose, the rotation R and the translation T, required to align the coordinate frame of P to the global coordinate frame, as well as its focal length f. This problem was studied in <cit.> using a homography matrix-based formulation, leading to a system of two degree-3, one degree-4 and three degree-8 polynomials in three variables. Using <cit.> led to a minimal solver with a template of size 70 × 82 with 12 roots. However, the polynomial coefficients are quite complicated, resulting in an inefficient execution time of 55 ms. Instead, we generated a more efficient solver using a depth-based problem formulation. The pose and the focal length are constrained via the following equations: α_1 R K^-1 p_1 + T = β_11 q_11 + t_g_1, α_j R K^-1 p_j + T = X_j, j = 2, …, 4, where K = diag([f, f, 1]) is the calibration matrix for P, α_j and β_ij denote the depths of the jth 3D point in the coordinate frames of P and G_i respectively. Without loss of generality, we transform the coordinate frame of G such that its origin coincides with the camera center of G_1, t_g_1 = [ 0 0 0 ]^⊤. For the sake of brevity, assume X_1 = β_11 q_11 in Eq. 
(<ref>). Eliminating T from Eq. (<ref>) gives the following equations: R K^-1 (α_i_1 p_i_1 - α_i_2 p_i_2) = X_i_1 - X_i_2, where 1 ≤ i_1 ≤ 4, i_1 < i_2≤ 4. For the sake of brevity, assume Y_i_1,i_2 = α_i_1 p_i_1 - α_i_2 p_i_2 and X_i_1,i_2 = X_i_1 - X_i_2. Eliminating R from Eq. (<ref>) yields K^-1 Y_i_1,i_2_2^2 = X_i_1,i_2_2^2, Y_i_1,i_2^⊤ K^-2 Y_i_3,i_4 = X_i_1,i_2^⊤ X_i_3,i_4, where 1≤ i_1, i_3≤ 4, i_1 < i_2≤ 4, i_3 < i_4≤ 4, (i_1,i_2)≠ (i_3,i_4). Equation (<ref>) denotes the depth formulation for the minimal problem and consists of 20 Laurent polynomials in 6 variables viz., α_1, α_2, α_3, α_4, β_11, f. The depth formulation tends to induce polynomials in more variables, but with much simpler coefficients, than those resulting from the homography formulation. The effect is primarily observed in the execution times of the minimal solvers based on the proposed formulation versus the homography-based formulation. Table <ref> (Row 1) shows the average time taken/call, measured for both the proposed and the SOTA homography-based solvers. We also evaluated the numerical performance of the proposed depth-based solver for synthetic scenes. For this purpose, we generated 5K 3D scenes with known ground truth parameters. In each scene, the 3D points were randomly distributed within a cube of dimensions 10 × 10 × 10 units. Note that for the 𝐇13f case, there is only one 2D-2D point correspondence. Therefore, each 3D point was projected into two pinhole cameras with realistic focal lengths. One camera acts as a query camera, P, which has to be registered, while the other camera represents the generalized camera G (consisting of only one pinhole camera). The orientations and positions of the cameras were randomly chosen so that they looked at the origin from a random distance of 15 to 25 units from the scene. The simulated images had a resolution of 1,000× 1,000 pixels. The failure rate for focal length estimation is reported in Tab. <ref> (Row 3 and Row 4). Note that the proposed solver has a lower or comparable failure rate than the SOTA homography-based minimal solvers <cit.> generated using the Gröbner basis <cit.> and the resultant <cit.>. At the same time, the proposed solver is 20 to 30 times faster than the two SOTA solvers. We also evaluated the solver performance in the presence of noisy scene points by introducing Gaussian noise into the coordinates of the 3D points sampled in the synthetic scene. The standard deviation of the noise was varied as a percentage of their depths, to simulate the different quality of the keypoints used to triangulate these 3D points. We also introduced 0.5 image noise to simulate noisy feature detection. For such a scene setup, we evaluated the stability of the proposed depth-based minimal solver against the SOTA homography-based minimal solvers in <cit.> using the methods based on Gröbner bases and resultants. Figure <ref> shows the error in focal length estimated by the solvers. Here, the box plots show the 25% to 75% quantiles as boxes with a horizontal line for the median. We note that our proposed depth-based solver has fewer errors, even with increasing noise in the 3D points, compared to the homography-based solvers from <cit.>. §.§ Time-of-Arrival self-calibration The Time-of-Arrival (ToA) (m,n) problem is formulated as follows. Given m× n distance measurements d_ij, i = 1, …, m, j = 1, …, n, find m points s_i (senders) and n points r_j (receivers) in 3-space such that d(s_i, r_j) = d_ij for all i,j. Here d(x, y) = x - y_2 is the distance function. 
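For illustration, the synthetic setup and the evaluation metric used below can be reproduced with a few lines of Python. The snippet is only a sketch: the uniform sampling in a unit cube follows the description below, while the perturbed "estimate" is a placeholder for the output of an actual solver.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
m, n = 4, 6                                   # the minimal ToA (4,6) configuration
s = rng.uniform(0.0, 1.0, size=(m, 3))        # senders in a cube with edge of length 1
r = rng.uniform(0.0, 1.0, size=(n, 3))        # receivers in the same cube
d = np.linalg.norm(s[:, None, :] - r[None, :, :], axis=2)   # m x n distance matrix (the input data)

def toa_error(s_hat, r_hat, s_true, r_true):
    # Error based on inter-sender and inter-receiver distances only,
    # so it is insensitive to the global Euclidean isometry ambiguity.
    err = 0.0
    for i, k in combinations(range(len(s_true)), 2):
        err += (np.linalg.norm(s_true[i] - s_true[k]) - np.linalg.norm(s_hat[i] - s_hat[k])) ** 2
    for j, l in combinations(range(len(r_true)), 2):
        err += (np.linalg.norm(r_true[j] - r_true[l]) - np.linalg.norm(r_hat[j] - r_hat[l])) ** 2
    return np.sqrt(err)

# Hypothetical estimate: here simply a small perturbation of the ground truth.
s_hat = s + 1e-6 * rng.normal(size=s.shape)
r_hat = r + 1e-6 * rng.normal(size=r.shape)
print(toa_error(s_hat, r_hat, s, r))          # of order 1e-6 for a near-perfect reconstruction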
All the points (senders and receivers) are assumed to be in general position in space. Clearly, any solution to the ToA problem can be only found up to an arbitrary Euclidean isometry. In the real world, the ToA problem arises from measuring the absolute travel times from unknown senders (speakers) to unknown receivers (microphones). If the signal speed is known, then the distances between the senders and receivers are also known, and we arrive at the ToA problem. The ToA (4,6) and (5,5) problems are minimal and have up to 38 and 42 solutions respectively. These problems have been studied in papers <cit.>. The solvers from <cit.> are currently the state-of-the-art. We used the ToA problem parametrization proposed in <cit.>. The (4,6) problem is formulated as a system of four polynomials of degree 3 and one of degree 4 in 5 unknowns. The related affine variety is the union of two subvarieties of dimensions 1 and 0. The 1-dimensional component consists of superfluous roots that have no feasible interpretation, while the 0-dimensional component consists of 38 feasible (complex) solutions to the problem. Similarly, the (5,5) problem is formulated as a system of five polynomials of degree 3 and one of degree 4 in 6 unknowns. The related variety is the union of a 2-dimensional “superfluous” subvariety and a 0-dimensional component consisting of 42 complex roots. Our generator automatically found the redundant solving sets of cardinality 48 for the (4,6) problem and of cardinality 60 for the (5,5) problem. The respective elimination templates are of size 427× 475 and 772× 832, see Tab. <ref>, Problems #31 and #32. We tested the new solvers on synthetic scenes. We modeled m senders and n receivers uniformly distributed in a cube with edge of length 1. The ground truth positions of the receivers and senders are 3-vectors s_i and r_j, respectively. The initial data for our solvers are the m× n distances d(s_i, r_j) for all i = 1, …, m, j = 1, …, n. We tested the numerical stability of the solvers on noise-free data by measuring the following error: ϵ = min(∑_k > i (d(s_i, s_k) - d(ŝ_i, ŝ_k) )^2 + ∑_l > j (d(r_j, r_l) - d(r̂_j, r̂_l) )^2)^1/2, where ŝ_i and r̂_j are the estimated positions of the senders and receivers and the minimum is taken over all real roots. The results are presented in Fig. <ref>. The speed and the failure rate of the solvers are compared in Tab. <ref>. § CONCLUSION In this paper, we have proposed a new algorithm for automatically generating small and stable elimination templates for solving Laurent polynomial systems. The proposed automatic generator is flexible, versatile, and easy-to-use. It is applicable to polynomial ideals with positive-dimensional components. It is also useful for automatically uncovering the partial p-fold symmetries, thereby leading to smaller templates. Using the proposed automatic generator, we have been able to generate state-of-the-art elimination templates for many minimal problems, leading to substantial improvement in the solver performance. § ACKNOWLEDGMENTS § ACKNOWLEDGMENT Snehal Bhayani has been supported by a grant from the Finnish Foundation for Technology Promotion. T. Pajdla was supported by EU H2020 SPRING No. 871245 project. amsplain
http://arxiv.org/abs/2307.02059v1
20230705065440
Noise Decoupling for State Transfer in Continuous Variable Systems
[ "Fattah Sakuldee", "Behnam Tonekaboni" ]
quant-ph
[ "quant-ph", "math-ph", "math.MP" ]
[email protected] The International Centre for Theory of Quantum Technologies, University of Gdańsk, Jana Bażyńskiego 1A, 80-309 Gdańsk, Poland [email protected] Quantum Systems, Data61, CSIRO, Clayton, Victoria 3168, Australia We consider a toy model of noise channels, given by a random mixture of unitary operations, for state transfer problems with continuous variables. Assuming that the path between the transmitter node and the receiver node can be intervened, we propose a noise decoupling protocol to manipulate the noise channels generated by linear and quadratic polynomials of creation and annihilation operators, to achieve an identity channel, hence the term noise decoupling. For random constant noise, the target state can be recovered while for the general noise profile, the decoupling can be done when the interventions are fast compared to the noise. We show that the state at the transmitter can be written as a convolution of the target state and a filter function characterizing the noise and the manipulation scheme. We also briefly discuss that a similar analysis can be extended to the case of higher-order polynomial generators. Finally, we demonstrate the protocols by numerical calculations. Noise Decoupling for State Transfer in Continuous Variable Systems Behnam Tonekaboni 0000-0003-4258-3139 August 1, 2023 ================================================================== § INTRODUCTION Utilizing continuous variables (CV) as tools for quantum computing and manipulations has become one of the promising areas of research in the quantum information community in the past decade <cit.>. There are several currently active research aspects in this area, for instance, quantum key distribution <cit.>, entanglement and resources theory <cit.>, quantum metrology and states discrimination <cit.>, and quantum communication and state transfers <cit.>, and noise analysis for manipulation of CV systems <cit.>. Despite the different perspectives, there is still room for development and detailed investigation. For example, it appears that several techniques used in quantum manipulation on finite systems are not translated into a set of tools for controlling CV systems. One of the interesting topics in this picture is the application of noise decoupling and control normally used in solid-state physics <cit.> to CV systems. The question is how much one can adapt the tools for noise control from finite systems to CV systems? This plays an important role in engineering the preparation, measurement, and also transfer of information encrypted state in a robust way. In this article, we partially contribute by considering a state transfer problem via noisy channels and introducing a decoupling protocol to gain noiseless channels. The noise decoupling protocol is a technique of insertion of a series of control operations that interleave a given noisy channel in a sequence to subtract the contribution of noise and obtain clean channels. It is well known under the name dynamical decoupling where the noise is the influence of the environment on the interesting system via a unitary evolution <cit.>. We consider a similar setup with different parameterizations, i.e. using a path distance as a dynamical parameter. For illustration, one can consider communication using photons through fiber or free space, in which the information is encrypted in a continuous degree of freedom of the carrier light <cit.>. 
For instance, the noise in the first scenario can be phase noise influenced by the medium <cit.>, and the interventions here are simply repeater nodes inserted along the path. For free space communication, even though one cannot intervene between transmitter and receiver nodes, e.g. between the ground station and a satellite, a similar picture can be drawn as a protocol of multi-step forward and backward transmissions, in which the first transmitter and the last receiver are treated as first and final nodes of communications while the intermediate steps are interventions. In these two scenarios, one can expect a protocol of applying control operations at those interventions and obtain a noise decoupling scheme for the CV state transfer. The aim of this article is to find relevant control sets for the simplification models of the aforementioned problem. The article is organized as follows. Formulations for CV systems and the model of the noise channels are discussed in Sec. <ref>. In Sec. <ref> we recall the basic description of the dynamical decoupling protocol and we discuss the translation of such protocol in our system. The decoupling protocol is elaborated in detail for the noisy displacement channels—random noise channels with creation and annihilation generators—both in the filter description in Sec. <ref> and in control group averaging picture in Sec. <ref>. We further discuss the quadratic generators and briefly discuss the higher-order generators in Sec. <ref>. Numerical illustrations of the protocol are given in Sec. <ref>, and finally, the conclusion is given in Sec. <ref>. § FRAMEWORK In this section we will introduce the set-up of our systems and the noise models. We begin with the overview of the noise model and then discuss the mathematical framework and the formulation for the noise in our model. We recall the Wigner representation and the counterparts of the elementary noise operations on the phase space. §.§ Noisy Transfer Channels The basic states transfer (or communication) scheme for quantum information is a protocol to send a state ρ on a given Hilbert space ℋ, from one node (transmitter) to another (receiver). The channel through which the transfer has been done is practically modeled as a completely positive and trace preserving (CPTP) transformation 𝒞 on a set of bounded operators ℬℋ and the received state can be written as 𝒞ρ. In the ideal scenario, such a channel is known and the information inside ρ can be extracted from the received state by standard state tomography <cit.>, or parameters characterization when only partial quantities are relevant. Hence the received state contains the same amount of information as the original one. However, when the communication involves disturbance from the environment or the imperfection of the medium encompassing the channel, the channel becomes partially unknown, resulting in a noisy channel. The simplest model for these channels is 𝒞_λ⃗=ℰ_λ⃗∘𝒞, where ℰ_λ⃗ is a noise operation, a stochastic CPTP map on ℬℋ, characterized by a parameter vector λ⃗. For simplicity, we model the noise operation as a random unitary operation equipped with a probability space (Ω,μ) in which the map can be represented by ℰ_λ⃗ = ∫_Ω dμω_λ⃗ℰ_ω_λ⃗ = ∫_Ω dμω_λ⃗𝒰_ω_λ⃗ where 𝒰_ω_λ⃗=U_ω_λ⃗·U_ω_λ⃗^†, and U_ω_λ⃗ is the unitary element for the map ℰ_λ⃗ (more on that in the next section). The parameter λ⃗ denotes a path from the transmitter to the receiver and ω_λ⃗ is a particular noise configuration for a given path λ⃗. 
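A minimal numerical illustration of such a random mixture of unitaries is the following Python sketch, which approximates the average in Eq. (<ref>) by Monte Carlo sampling for purely displacement-type noise in a truncated Fock basis. The Gaussian distribution of the noise parameter, its width, the sample size and the cutoff are illustrative choices.

import numpy as np
from scipy.linalg import expm

N = 40                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
rng = np.random.default_rng(1)

def D(alpha):                                 # displacement operator exp(alpha a^dag - alpha* a)
    return expm(alpha * a.conj().T - np.conj(alpha) * a)

rho0 = np.zeros((N, N), dtype=complex)
rho0[0, 0] = 1.0                              # pure vacuum state |0><0|

# Noise channel as an ensemble average over randomly drawn unitaries.
samples = [rng.normal(scale=0.5) + 1j * rng.normal(scale=0.5) for _ in range(500)]
rho = sum(D(al) @ rho0 @ D(al).conj().T for al in samples) / len(samples)

print("purity of the input :", np.real(np.trace(rho0 @ rho0)))   # 1.0
print("purity after channel:", np.real(np.trace(rho @ rho)))     # < 1: the random mixture decoheres the state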
The main task of noise suppression is to modify the noisy channel 𝒞_λ⃗ in order to achieve the noiseless channel 𝒞. For simplicity, let the channel 𝒞 be absorbed in the state ρ and the parameter λ⃗ is one dimensional and denoted by λ. The problem becomes equivalent to manipulating the noise operation ℰ_λ to approach an identity channel ℐ. In this picture, one of the simplest examples is a communication wire where the parameter λ associates the distance from the transmitter node towards the receiver node, and the ℰ_λ is noise at the given position on the transfer path from the origin. In the next section, we will consider the structure of this operator in detail. §.§ Continuous Variables Systems and Noise Operations In this section we will recall the framework on which we are working in this article. Particularly, we first introduce the continuous variable systems that we are interested in, namely optical systems, and then equipped with that introduce the noise model. §.§.§ Harmonic oscillator as continues variable system Let ℋ be a Hilbert space of square-integrable complex-valued functions on a real line, i.e., ℋ=L^2ℝ=ψ:ℝ→ℂ: ∫_ℝψx^2dx < ∞. Following conventions in quantum mechanics of continuous variables, we define position x and momentum p operators on the Hilbert space ℋ as multiplicative xψx=xψx and derivative operators pψx=-i∂_xψx respectively. The position and momentum operators follow canonical commutation relation x,p:=xp-px=i, where i is the unit of imaginary numbers. The Hilbert space ℋ can be spanned by two sets of bases; a continuous one δx: x∈ℝ and a discrete set of states ϕ_nx:n=0,1,…, where H_harϕ_nx=n+1/2ϕ_nx defines eigenstates of a harmonic oscillator H_har=1/2(x^2+p^2). Using the latter basis, one also defines annihilation operator a and creation operator a^† where aϕ_n=√(n)ϕ_n-1 for n>0 and aϕ_0=0 (a vacuum state) and a^† is its Hermitian conjugate. The canonical commutation will then be a,a^†=1. We also use a Dirac notation for the states and the dual states, i.e. |ψ⟩ represents the function ψ and |n⟩ represents ϕ_n for n=0,1,…. Furthermore, We define displacement operator Dα=expαa^† - α^*a. Similarly, we define squeeze operator Sz=expz^*a^2-za^†^2/2. Note that the displacement parameter α, is a complex number where the real part is to displace in position space and the imaginary part is to displace in momentum space. Similarly, the squeezing parameter z, is a complex number. Here, for the sake of simplicity and without loss of generality, we will mainly discuss real positive squeezing parameters. On the operation level we also write 𝒟α=Dα·D^†α and 𝒮z=Sz·S^†z. §.§.§ Noise model In this work, we consider that the noise is a combination of random displacement and random squeezing. To model this type of noise we set ω_λ a mapping ω_λ:ℓ↦α_ℓ,z_ℓ∈ℂ^2 for ℓ lying on the path λ. We then consider the unitary elements for the map ℰ_λ of the form U_ω_λ = 𝒯_λexp[∫_0^λGω_λℓdℓ] = 𝒯_λexp[∫_0^λGα_ℓ,z_ℓdℓ] , where λ denotes a path length, G α_ℓ,z_ℓ = [12z^*_ℓa^2-z_ℓa^†^2+α_ℓa^† - α^*_ℓa] , for a position ℓ on the path λ, and 𝒯_λ is an ordering operator with respect to the path λ defined by 𝒯_λ[Gω_λℓGω_λℓ'] = {[ Gω_λℓGω_λℓ' , ℓ≤ℓ'; Gω_λℓ'Gω_λℓ, ℓ'<ℓ. ]. For a simple communication wire, one can consider variables x,p for the transverse position of a photon carrying a message, and the parameter λ indicates the position on the propagating (longitudinal) direction. 
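The operators defined above are easy to represent numerically in a truncated Fock basis, which is useful for quick checks of the algebra. The following sketch, with an arbitrary cutoff, verifies the canonical commutation relation and that D(α) applied to the vacuum reproduces the standard coherent-state amplitudes e^{-|α|^2/2} α^n/√(n!).

import numpy as np
from scipy.linalg import expm
from math import factorial

N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator in the Fock basis
ad = a.conj().T

# [a, a^dag] = 1 holds exactly away from the truncation edge.
comm = a @ ad - ad @ a
print(np.allclose(comm[:N - 1, :N - 1], np.eye(N - 1)))

# D(alpha)|0> reproduces the Fock amplitudes of a coherent state.
alpha = 0.8 + 0.3j
vac = np.zeros(N, dtype=complex)
vac[0] = 1.0
psi = expm(alpha * ad - np.conj(alpha) * a) @ vac
ref = np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** k / np.sqrt(factorial(k)) for k in range(N)])
print(np.max(np.abs(psi - ref)))              # small, limited only by the Fock cutoff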
The action generated by Gα_ℓ,z_ℓ is a contribution of the noise at the longitudinal position ℓ from the propagating origin, and hence ℰ_ω_λ represents the accumulation of noise during the propagation up to the longitudinal position λ on the wire, where the effects of squeezing, translation, and attenuation types are calculated for the transverse position x,p for such λ. We will revisit this example after we discuss the manipulation. §.§ Wigner Representation and Gaussian Manipulation Framework From the formulation we only have Gaussian noise operations, operations whose generators are at most in the second order of the creation and annihilation operators. We further assume that the state ρ is also given by a mixture of coherent states. Such states are Gaussian states, and the states after noise operations are also Gaussian states. It then becomes useful to consider here Wigner functions <cit.>, where both coherent states (and their mixtures) and Gaussian operations (and their averages) can be illustrated as simple functions and their transformations on phase space. In particular, we express a density matrix ρ as W_ρx,p = 1/2π∫_-∞^∞ dy e^-ipy⟨x+y2|ρ|x-y2⟩. We also write the function with a single complex argument W_ρα=W_ρx,p where α=x+ip√(2). Remark that the Wigner function is normalised, i.e. ∫ dx ∫ dp  W_ρx,p =1, but may be not positive. In parallel, we employ a Wigner transformation for the operator (a dual Wigner function), namely W^Gx,p=∫_-∞^∞ dy e^ipy⟨x-y2|G|x+y2⟩=W^Gα, for the operator G. Here the expectation value of the operator G reads ⟨G⟩_ρ =Gρ=∫ dx∫ dp  W^Gx,pW_ρx,p = ∫ dα∫ dα^*  W^GαW_ρα. For illustration, let us consider a mixture of coherent states ρ ρ = ∑_rp_rρ_r, where ∑_rp_r=1, p_r>0, and ρ_r=|θ_r⟩⟨θ_r|, |θ_r⟩=∫_-∞^∞ dx ψ_σ_rx-θ_r|x⟩, ψ_σ_r(x-θ_r)=(1/2πσ_r^2)^1/4e^-x-θ_r^24σ_r^2, with θ_r on the phase space indicating the center of the Gaussian state ρ_r equipped with spreading parameter σ_r^2. The function Eq. (<ref>) is simply a Gaussian function with various centers and variances. Its Wigner function reads W_θx,p =∑_rp_rW_rx,p, W_rx,p := 1πe^-x-θ_r^2/2σ_r^2e^-2σ_r^2p^2. In this representation, both displacement and squeeze operators, which are the element of noise operators, can be made associated to transformations of Wigner functions on phase space. For displacement operator Dα, one can define an associate transformation on phase space T_Dα as T_Dα'W_ρα:=W_𝒟α'ρα=W_ρα+α', and similarly the squeeze operator Sz can be associated with T_Sz via T_Sz'W_ρα:=W_𝒮z'ρα. The exact form of the latter operation is complicated in general, but one of the comprehensive cases is the case when the squeezing parameter z is positive real z=z, in which the transformation corresponds to scaling of the phase space. In particular T_SγW_ρx,p=W_𝒮γρx,p = W_ρe^-γ x,e^γp, where γ is a positive number. The squeeze operators and their transformations are additive in the squeezing parameters, i.e., T_Sγ_1T_Sγ_2=T_Sγ_1+γ_2 for γ_1 and γ_2 real positive. For complex squeezing parameters, one can re-parametrize the squeeze operator into Sz=S_czγ=exp[γ2cz^2-c^†z^2] , where we assume a Bogoliubov-like transformation cz=√(t)a+√(r)a^†e^iθ, t,r>0 and t+r=1, and we write z=γt-re^iθ with γ>0. The corresponding transformation on the phase space similar to Eq. (<ref>) can also be achieved for real coordinates associated with the new operators cz and c^†z. However, the additivity property does not hold for the complex parameter. 
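As a quick numerical consistency check of Eqs. (<ref>) and (<ref>), the Wigner integral can be evaluated on a grid for a single Gaussian wave packet ψ_σ(x-θ) and compared with the closed Gaussian form quoted above; the grid parameters in the sketch below are arbitrary.

import numpy as np

theta, sigma = 1.2, 0.7                       # centre and spread of the Gaussian state
psi = lambda x: (1.0 / (2 * np.pi * sigma ** 2)) ** 0.25 * np.exp(-(x - theta) ** 2 / (4 * sigma ** 2))

ygrid = np.linspace(-20.0, 20.0, 4001)
dy = ygrid[1] - ygrid[0]

def wigner(x, p):
    # W(x,p) = (1/2 pi) Integral dy exp(-i p y) psi(x + y/2) psi*(x - y/2)
    integrand = np.exp(-1j * p * ygrid) * psi(x + ygrid / 2) * np.conj(psi(x - ygrid / 2))
    return np.real(np.sum(integrand)) * dy / (2 * np.pi)

for x, p in [(0.0, 0.0), (1.2, 0.0), (2.0, 0.5)]:
    analytic = np.exp(-(x - theta) ** 2 / (2 * sigma ** 2)) * np.exp(-2 * sigma ** 2 * p ** 2) / np.pi
    print(x, p, wigner(x, p), analytic)       # the last two columns agree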
This can be easily seen from the dependency of the parameters in the definition of cz and c^†z, which can be different for different sections of the path. Until now one can observe that in some expressions the representation in real coordinate x,p is more transparent than the complex coordinate α and vice versa. Hence, for convenience, we will use real coordinate x,p and complex coordinate α interchangeably throughout this paper. § NOISE DECOUPLING PROTOCOLS FOR DISPLACEMENT NOISE Here we recall the general ideas of dynamical decoupling (DD) protocol and briefly discuss the connection to our set-up. We then elaborate on the mechanism for inserting the control operations in our toy model. We discuss the formalism for the displacement noise as manipulation on phase space and then we conclude by revisiting the DD descriptions for that case. §.§ Dynamical Decoupling Scheme Dynamical decoupling protocols <cit.> is a well-known control tool for manipulation of signal generated or passed through a system interacting with an environment. The principle is to interlace the dynamical evolution by additional local cyclic unitary operators, leading to an effective Hamiltonian where the system is decoupled from the environment. In particular, define a system 𝒮 governed by a couple evolution to an environment ℬ, U_0t=e^-itH_0, where H_0=H_S⊗+⊗H_B+H_SB=∑_αS_α⊗B_α and set an initial state as ρ0=ρ_S0⊗ρ_B0. Now define a control set 𝒞_S⊂ℬℋ_S — a set of bounded operators on the system contains H_1t such that U_1t=𝒯exp-i∫_0^tduH_1u=U_1t+T_C of a period T_C, where 𝒯 is a time order operation. It is proven that, for arbitrary interaction H_SB, by putting an arbitrary fast control into the reduced evolution the average of an observable A∈𝒞_S can be given by <cit.> lim_N→∞Aρ_SNT_C = Aρ_S0, where ρ_St=_BU_tottρ0U^†_tott, and the controlled couple evolution reads U_tott=𝒯exp-i∫_0^tduHu with Ht=∑_αU^†_1tS_αU_1t⊗B_α. If the control set can be arbitrary 𝒞_S=ℬℋ_S, it can be restated as lim_N→ Nρ_SNT_C=ρ_S0. Furthermore, for the case when the interaction space ℐ_S induced by the terms ⊗H_B and H_SB in the Hamiltonian is known, the control such that 𝒯∫_0^T_CduU^†_1uℐ_SU_1u=0 will lead to lim_N→∞ρ_SNT_C=e^-iNT_CH_Sρ_S0e^iNT_CH_S for some local Hamiltonian H_S on the system. The procedure above is enabled by two mechanisms. First is the lowest-order Magnus approximation U_totNT_C≈exp-iNT_CH_0= exp-iN∫_0^T_CduHu which can be achieved within the given limit. Second, is the symmetry of the average induced by the control set 𝒞_S. This can be seen from writing U_1t = g_j for t∈[jΔ t,(j+1)Δ t) and Δ t =T_C/𝒢, where 𝒢=g_j denote a finite group of unitary operators generating 𝒞_S. The lowest order Hamiltonian can then be written as H_0 = ∑_αS_α⊗B_α =∑_α(1𝒢∑_jg^†_jS_αg_j)⊗B_α. The group average of an individual term S_α commutes with 𝒢 and hence 𝒞_S. The dynamical decoupling argument Eq. (<ref>) then follows automatically. Similarly, the special case, Eq. (<ref>) can be re-expressed in term of group average as 1𝒢∑_jg^†_jℐ_Sg_j=0 , being a condition for decoupling in Eq. (<ref>). For CV systems, there are only a few studies about the application of dynamical decoupling thereon. One of them is done by Vitali and Tombesi in Ref. <cit.> where they considered a unitary evolution of two oscillators. It is observed that such evolution can be reversed by a perturbation of a local parity operator on phase space, in the sense the perturbed Hamiltonian is the original Hamiltonian with a minus sign. Another recent work by Arenz et. al. 
<cit.> contains a detailed description of the generic dynamical protocol for CV systems coupled to another CV system, generalizing the former observation. It is also stated that in general, unlike the finite dimension cases, for arbitrary Hamiltonian, one cannot find a control group/set such that the average with respect to it leads to a complete decoupling similar to condition Eq. (<ref>). In other words, a control group/set, if there exists, must be designed according to the Hamiltonians, and only some dynamics can be decoupled. In our case, even though the noise is not described by dynamical evolution, it shares a similar structure to the open dynamics above, where in this case the time parameter t is replaced by the path parameter ℓ. The noise in our is considered classical in this perspective since it is represented by a stochastic function depicted in Eq. (<ref>). In the following, we will elaborate on this picture from the most straightforward scenario where the noise is solely equipped with displacement operators; then we will consider the more complex case of squeezing noise; and finally, we will discuss our problem concerning the noise operator of the compound form Eq. (<ref>). To do so we need to consider an intervention scenario in our setup. §.§ Truncations of Noise Channels and Interventions upon Them Before exploring the manipulation scheme, let us first discuss the physical picture of intervention in our problem. Recall the unitary noise operator U_ω_λ, for a particular noise configuration ω_λ. Let us define a two-point propagator between point ℓ to point ℓ' on the path λ as U_ω_λ;ℓ':ℓ. As previously discussed, the path λ can be thought of as a transferring path across the communication channel, it is then interesting to consider an additivity with respect to the path parameter U_ω_λ = U_ω_λ;λ:0 = U_ω_λ;λ:ℓ∘U_ω_λ;ℓ:0, where 0 ≤ℓ≤λ and U_ω;0:0=. These relations represent the truncation of the channel, for a particular noise configuration ω_λ with respect to the path λ. One of the simple case for the generator U_ω_λ;λ:0=𝒯_λ e^G_ω_λ;ℓ':ℓ satisfying the conditions above is G_ω_λ;ℓ':ℓ=∫_ℓ^ℓ'G_ω_λℓ dℓ for some anti-Hermitian operator G_ω_λℓ. The truncation Eq. (<ref>) is possible by the presence of 𝒯_λ, which can be removed when commutativity G_ω_λℓ,G_ω_λℓ'=0 holds for 0≤ℓ,ℓ'≤λ. These are the special cases of displacement noise (z_ℓ=0) or real squeeze noise (α_ℓ=0 and z=0). In such two cases, the mentioned divisibility is related to the additivity of displacement operators or squeeze operators with positive real parameters. For more general noise beyond these simple cases, one can remove the symbol 𝒯_λ with the help of additional techniques and assumptions, e.g. Trotter product-like approximation and limiting process, which will be discussed later. By repeating the same mechanism for n steps, we obtain U_ω_λ=U_ω_λ;ℓ_n:ℓ_n-1∘U_ω_λ;ℓ_n-1:ℓ_n-2∘⋯∘U_ω_λ;ℓ_1:0, for λ=:ℓ_n>ℓ_n-1>…>ℓ_1>0, one can define a stochastic process ω⃗=ω_n,…,ω_1 such that G_ω_k:=G_ω;ℓ_k:ℓ_k-1=∫_ℓ_k-1^ℓ_kG_ωℓ dℓ. Let U_ω_k=U_ω;ℓ_k:ℓ_k-1=𝒯_λ e^G_ω_k and 𝒰_ω_k=U_ω_k·U^†_ω_k the associate operation. The expression Eq. (<ref>) can then be re-expressed as ℰ_λ = ∫_Ω dμω_λℰ_ω_λ = ∫_Ω dμ̃ω⃗∏_k=n^1𝒰_ω_k, where μ̃ is the distribution of ω⃗ on a sample space Ω induced by μ on original space Ω. Now let us mention a crucial characteristic of the sequential mechanism, namely noise auto-correlation. 
This corresponds physically to the memory registered in the medium, creating conditions for the noise realization ω_k on the previous realizations ω_k' for k'<k. With the structure given above, we can consider Eq. (<ref>) and discuss a manipulation there on. We aim to perform some engineering over intermediate processes to shape the Eq. (<ref>) to achieve a noiseless one. In general, we transcribe such mechanism as additional CPTP maps 𝒜_k=∑_γ_kÂ_γ_k·Â^†_γ_k inserted after the action 𝒰_ω_k at each time step 0≤ k≤ n and the element ∏_k=n^1𝒰_ω_k will be modified to 𝒜_n∘𝒰_ω_n∘𝒜_n-1∘𝒰_ω_n-1∘⋯∘𝒜_1∘𝒰_ω_1∘𝒜_0. With these at hand, the noise canceling procedure problem is to choose the proper set of operations 𝒜_k (aka control sequence) such that one can approximate the resulting operation Eq. (<ref>) to an identity operation. In the following, we will consider possible control sequences for our particular noise model Eq. (<ref>), by beginning with a special case of displacement noise, followed by real squeezing noise and an approximation for the general noise from such model. §.§ Modification of the Displacement Noise First let us consider the noise when there is only the displacement part in the element Eq. (<ref>), i.e. z=0. Here the elements for the operations Eq. (<ref>) of length n reads 𝒜_n∘𝒟α_n∘𝒜_n-1∘𝒟α_n-1∘⋯∘𝒜_1∘𝒟α_1∘𝒜_0, where α_k corresponds to the process element ω_k. Now define ℳ_k and 𝒩_k via 𝒜_k=ℳ_k∘𝒩_k for 2≤ k≤ n-1. We choose the modification operations such that 𝒩_k∘𝒟α_k∘ℳ_k-1 = 𝒟s_kα_k, for 2≤ k≤ n-1, and 𝒩_1∘𝒟α_1∘𝒜_0 = 𝒟s_1α_1, 𝒜_n∘𝒟α_n∘ℳ_n-1 = 𝒟s_nα_n where s_k=-1^k for 1≤ k≤ n. It follows that 𝒟s_nα_n∘𝒟s_n-1α_n-1∘⋯∘𝒟s_1α_1=𝒟(∑_ks_kα_k). The state after the noise channel will read 𝔼ℰ_α_λρ = 𝔼[𝒟(∑_ks_kα_k)] ρ, where 𝔼X=∫ X  dμ̃α_1,…,α_n is a short-hand notation for the average. Now let us revisit the Wigner representation on phase space. For a state ρ with a Wigner representation W_ρ, one can have T_𝒟(∑_ks_kα_k)W_ρα_0 = W_ρ(α_0+∑_ks_kα_k) , where the last average is expected to be W_ρα_0 by our manipulation. We define a 2-dimensional Fourier transform on the phase space 𝔉fβ =𝔉fξ,ζ = 12π∫_-∞^∞ dx∫_-∞^∞ dp e^iβ^Tαfα, = 12π∫_-∞^∞ dx∫_-∞^∞ dp e^ixξ+pζfx,p, 𝔉^-1 denotes its inverse transform, and we write β^Tα=xξ+pζ a scalar product of real vector representations of two complex numbers α=x  p^T≃ x+ip and β=ξ  ζ^T≃ξ+iζ, i.e., β^Tα=βα^*+βα/2 in terms of complex numbers operations. Also, f∗ gα=∫ dα'∫ dα'^* fα'gα-α' denotes a (2-dimensional) convolution between functions f and g. Here the state at the receiver becomes 𝔼[   W_ρ(α_0+∑_ks_kα_k)] = 𝔼[𝔉^-1( e^-iβ^T_0∑_ks_kα_kW_ρβ_0)] = 12π[f_n∗ W_ρ]α_0, where f_nα_0 = 𝔼[𝔉^-1( e^-iβ^T_0∑_ks_kα_k)] , ĝβ=𝔉gβ=𝔉gα, and ǧα=𝔉^-1gα=𝔉^-1gβ. From Eq. (<ref>) one can interpret the function f_n as a noise filter function where the convolution inside represents a deformation of the target object by the noise and modification we implemented. At this point let us remark two directions one can handle this issue. First, since the condition for which the perfect noise suppression can be written as f_nα=2πδα=2πδxδp, the Dirac distribution on the phase space, the direct approach is to modulate the filter function to achieve such a condition. Indeed one can also consider various sequences s_1,…,s_n, and manipulation thereon, and attempt to modify the convolution function to achieve or at least approximate the delta distribution. 
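The narrowing of the filter towards the delta distribution can be illustrated with a small Monte Carlo experiment. In the sketch below, which is only illustrative (the correlated noise is modeled as an Ornstein-Uhlenbeck-type process, only one real quadrature of α is tracked, and all numerical values are placeholders), the spread of the accumulated quantity ∑_k s_k α_k shrinks as the number of sign-alternating segments grows, while for noise that is constant along the path the sum cancels exactly.

import numpy as np

rng = np.random.default_rng(2)
L, steps = 1.0, 2000
dl = L / steps
ell = np.arange(steps) * dl

def correlated_noise(corr_len=0.2, amp=1.0):
    # Ornstein-Uhlenbeck-type noise profile along the path (an illustrative choice).
    x = np.zeros(steps)
    for k in range(1, steps):
        x[k] = x[k - 1] * np.exp(-dl / corr_len) \
               + amp * np.sqrt(1.0 - np.exp(-2.0 * dl / corr_len)) * rng.normal()
    return x

def switched_sum(alpha, n_seg):
    # sum_k s_k alpha_k with s_k = (-1)^k over n_seg equal segments of the path.
    signs = (-1.0) ** (np.floor(ell / (L / n_seg)).astype(int) + 1)
    return np.sum(signs * alpha) * dl

print(switched_sum(np.ones(steps), 4))        # ~0: exact cancellation for path-independent noise
for n_seg in [2, 4, 8, 16, 32, 64]:
    gammas = [switched_sum(correlated_noise(), n_seg) for _ in range(200)]
    print(n_seg, np.var(gammas))              # the spread, i.e. the width of the filter, shrinks with n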
Another direction is that, instead of engineering a noise cancellation protocol, one can use the scheme above as a noise spectroscopy, by injecting a known target state ρ and employing availability to adjust s_1,…,s_n, and construct a collection of filter functions f_s_1,…,s_n. Later one can use such a function as a template for the reconstruction of an unknown target state in the original problem. We left the second perspective for further investigation and discuss only the first scenario. To demonstrate the procedure of noise suppression, we consider an important class of noise distributions e.g. Gaussian noise, where every moment can be characterized by first and second moments <cit.>. Furthermore, without loss of generality, we assume that the distribution μ is balanced in such a way that 𝔼α_k = 0 for k=1,…,n, hence all odd moments are also 0. It follows that f_n α_0= 𝔼[𝔉^-1( e^-iβ^T_0∑_ks_kα_k)] = 𝔉^-1( exp{ -1/2∑_kk' s_ks_k'𝔼[β^T_0α_kβ^T_0α_k']}) , = 𝔉^-1[ exp( -1/2β^T_0Σ_nβ_0)] , where Σ_n =([ A_n C_n; C_n B_n ]), A_n = 14∫_0^λdℓ∫_0^λdℓ' F_nℓF_nℓ' ×𝔼(α_ℓα_ℓ'+α^*_ℓα_ℓ'+α_ℓα^*_ℓ'+α_ℓα_ℓ') =∫_0^λdℓ∫_0^λdℓ' F_nℓF_nℓ'𝔼x_ℓ x_ℓ', B_n = -14∫_0^λdℓ∫_0^λdℓ' F_nℓF_nℓ' ×𝔼(α_ℓα_ℓ'-α^*_ℓα_ℓ'-α_ℓα^*_ℓ'+α_ℓα_ℓ') =∫_0^λdℓ∫_0^λdℓ' F_nℓF_nℓ'𝔼p_ℓ p_ℓ', C_n = -i4∫_0^λdℓ∫_0^λdℓ' F_nℓF_nℓ'𝔼(α_ℓα_ℓ'-α_ℓα_ℓ') = 12∫_0^λdℓ∫_0^λdℓ' F_nℓF_nℓ'𝔼(x_ℓp_ℓ'+p_ℓx_ℓ'), and F_nℓ = {[ -1, ℓ_k-1<ℓ≤ℓ_k,  k is odd,; 1, ℓ_k-1<ℓ≤ℓ_k,  k is even.; ]. Suppose that the covariance matrix Σ_n can be made positive definite, the filter function can be reduced to <cit.> f_nα_0= 12π√(Σ_n)exp[ -1/2α^T_0Σ_n^-1α_0] . With this form at hand, it is transparent that to approximate the perfect decoupling form the noise Eq. (<ref>), one needs to modulate the filter in such a way that Σ_n becomes smaller as n increases, providing a Gaussian smearing for the ideal filter Eq. (<ref>). We will illustrate this mechanism when we discuss numerical examples in Sec. <ref>. In the following, we revisit the noise decoupling scheme introduced in Sec. <ref>, and show that the principle behind our proposed protocol is simply the same as in the dynamical decoupling scheme. §.§ Noise Decoupling Perspective Here let us discuss the noise decoupling for the manipulation scheme above. In fact, the introduction of the intervention operations Eq. (<ref>) is not arbitrary and the ability to modulate noise to nullity is based on the principle of noise decoupling introduced in Sec. <ref>. Note that for the displacement noise 𝒟α, the noise space ℐ_S (previously called interaction space) is generated by operators x and p (or equivalently a and a^†). It is clear that, by definitions, all displacement operators are the elements of ℐ_S. The control space C_S is generated by a group 𝒢_D=,Π, where is an identity operator and Π is a parity operator defined by Πψx=ψ-x or Πφp=φ-p for any wave functions ψx or φp <cit.>. As an action on the displacement operators, we have ΠDαΠ=D-α=D^†α for arbitrary α. The group theoretic average of the generators of the displacement noise operators consequently read 12(x + ΠxΠ)=12(p + ΠpΠ)=0. In other words, 1𝒢_D∑_g_j∈𝒢_Dg^†_jℐ_Sg_j=0 , which is analogous to the special case in the original dynamical decoupling protocol Eq. (<ref>). By this argument, one can have a decoupling scenario analogous to Eq. (<ref>) when one supposes a similar limit to the former expression. We observe that the manipulation operators in Eqs. 
(<ref>)-(<ref>) are indeed operations associate with elements of 𝒢_S, namely 𝒩_k = {[ ·, even k,; Π·Π, odd k, ]., ℳ_k = {[ Π·Π, even k,; ·, odd k, ]. for k=1,…,n-1, 𝒜_0=Π·Π and 𝒜_n = {[ Π·Π, even n,; ·, odd n, ].; or in other words one can simply set 𝒜_k=Π·Π for k=0,…,n. In this sense, the state at the receiver Eq. (<ref>) simply represents a counterpart of evolved state ρNT_C introduced in Sec. <ref>, where the noise generators G_ωℓ corresponds to time-dependent Hamiltonians, which is more general than the simple scenario in Sec. <ref>. In the same inscription of ρNT_C, one can think of the path parameter ℓ as the time parameter t, the average over noise degrees of freedom 𝔼 as the partial trace over (classical) bath degrees of freedom, and most importantly the elementary noise operator Eq. (<ref>) is analogous to the total time evolution U_totNT_C where the interventions as in Eq. (<ref>) exhibit control mechanism. Here one can set T_C=𝒢_D=2 and n=NT_C=2N for some N. Recall G_ω_k=α_ka-α^*_ka^† for the displacement noise, Eq. (<ref>) can be written as a conjugation of the unitary operator exp(∑_m=1^N∑_g_k∈𝒢_Dg^†_kG_ω_k+2m-1g_k)=exp(N∑_g_k∈𝒢_Dg^†_kG_kg_k) , where G_k=α_ka-α_ka^† with α_k = 1/N∑_m=1^Nα_k+2m-1. For path parameter independent generators, i.e. G_ωℓ=G_ωℓ' for ℓ≠ℓ' or consequently α_k=α_k' for k≠ k', it is clear that the manipulated noise operator equals to an identity channel for any number N, with the help of Eq. (<ref>). Namely, for path-independent displacement noise, the noise can be decoupled and suppressed completely by only one intervention in the middle of the communication path. This resembles the special case Eq. (<ref>) as claimed [Remark that the limit in Eq. (<ref>) arises from perturbation formalism, while in our case Eq. (<ref>) automatically holds by the additive property of the displacement operators.]. For path-dependent generators, although the complete suppression of the noise might not always be the case one can still find situations for which the suppression holds. For instance, assume that the noise is ergodic and 𝔼α_k=𝔼α for all k (e.g. stationary), within the limit N→∞, one can claim that the time steps average of the noise parameters Eq. (<ref>) is identical to configuration average 𝔼α <cit.>. In other words, one can say that lim_n→∞∏_k=1^n𝒜_k∘𝒟α_k=ℐ for 𝒜_k is given in Eq. (<ref>) and when the noise is ergodic and stationary. Finally, we note that the similar analysis above can also be extended to a more generic case with general interventions Eq. (<ref>). In particular, by setting T_C=G for some control algebra such that the relation Eq. (<ref>) holds and setting n=NT_C=NG, the manipulated noise operation in such expression can be described as a conjugation of 𝒯_λexp(N∑_g_k∈𝒢_Dg^†_kG_ω_kg_k) , where G_ω_k = 1/N∑_m=1^NG_ω_k. Here, by assuming that the noise is ergodic and ∫_Ω dμ̃G_ω_k=∫_Ω dμ̃G_ω for all k (e.g. stationary), it follows that lim_N→∞G_ω_k=∫_Ω dμ̃G_ω, and by the vanishing of average noise operator Eq. (<ref>), one could expect that lim_n→∞(∏_k=1^n𝒜_k∘𝒰_ω_k)∘𝒜_0=ℐ. The next section will discuss this scenario, as well as its related spectroscopy filters, for the squeezing noise and for the combined noise introduced in Sec. <ref>. § NOISE DECOUPLING PROTOCOLS FOR SQUEEZING NOISE AND COMBINE NOISE We have demonstrated the implementation of the DD protocol for the case of displacement noise. We will show that a similar picture can be obtained for the squeeze noise with real parameters. 
We further illustrate that, under the condition that the control operations can be inserted arbitrarily fast and the noise is stationary, one can also obtain a complete decoupling for the generic squeeze noise and the noise generated by any quadratic polynomial in creation and annihilation operators. Lastly, we give an overview of a possible extension of the protocol to the case for the noise operators generated by a polynomial in creation and annihilation operators of a degree higher than two. §.§ Manipulation on Squeezing Noise with Real Positive Squeezing Parameters Now let us consider the cases when there only squeezing noise, where we propose a control set inspired by the noise decoupling perspective and then discuss the manipulation in Wigner representation including filtering mechanism and noise spectroscopy. We begin with the average of noise space ℐ_S, which is, in this case, generated by a^2 and a^†^2. Now we consider 𝒢_S=,R_π/2=e^-iπaa^†/2 as a control set [In principle, to make a clear comparison to the noise decoupling scheme, one should consider a cyclic group ,R_π/2,R_3π/2,R_π. Although the set 𝒢_S is not a cyclic group, it suffices to consider this set for the control of squeeze operators since the squeeze operators have two-fold symmetry, making the action of R^2_π/2 similar to that of , and R^3_π/2 to R_π/2, respectively.]. Clearly, the average of the noise space with respect to the set 𝒢_S is zero, i.e. 1𝒢_S∑_g_j∈𝒢_Dg^†_jℐ_Sg_j=0 , since for a real number ω, e^iωaa^†ae^-iωaa^†=e^iωa; or 12(a^2 + R^†_π/2a^2R_π/2)=12(a^†^2 + R^†_π/2a^†^2R_π/2)=0. In other words, the control set 𝒢_S can be used to modify squeezing noise, and a similar decoupling scheme within the same spirit can be formulated as in the displacement noise. From the operation picture, similar to the action of Π, which reverses displacement operators, the rotation R_π/2 does reverse squeeze operators, i.e., R^†_π/2SzR_π/2=S-z=S^†z, regardless of the parameter z. We will use this identity to construct a filter similar to the case of displacement noise. To do so, for the squeezing noise with manipulation 𝒜_n∘𝒮z_n∘𝒜_n-1∘𝒮z_n-1∘⋯∘𝒜_1∘𝒮z_1∘𝒜_0, one can set 𝒜_k ={[ R_π/2·R^†_π/2, even n,; R^†_π/2·R_π/2, odd n, ]., for k=1,…,n, and 𝒜_0=R_π/2·R^†_π/2. Specifically, in parallel with the displacement noise, we define ℳ_k and 𝒩_k via 𝒜_k=ℳ_k∘𝒩_k for 2≤ k≤ n-1, and choose 𝒩_k = {[ ·, even n,; R^†_π/2·R_π/2, odd n, ]., ℳ_k = {[ R_π/2·R^†_π/2, even n,; ·, odd n, ]. for k=1,…,n-1, 𝒜_0 and 𝒜_n are defined as above. Finally, we arrive at 𝒮s_nz_n∘𝒮s_n-1z_n-1∘⋯∘𝒮s_1z_1, where s_k=-1^k for k=1,…,n. However, unlike the displacement noise, without further assumption, the Eq. (<ref>) cannot be combined trivially into a squeeze operation with a parameter comprising the parts. Hence from now, we focus on real positive parameters z_k=γ_k>0, and later will consider an approximation for general squeezing parameters within an appropriate limit. We write 𝒮s_nγ_n∘𝒮s_n-1γ_n-1∘⋯∘𝒮s_1γ_1=𝒮(∑_ks_kγ_k). Clearly, for the path-independent noise, only one intervention or n=2 suffices to decouple the noise from the state at the receiver. For the path-dependent noise, without limiting procedure, it leads to a convolution between the target state and a filter function constituted by noise operations and our modification operations. Recall that T_𝒮(∑_kγ_k)W_ρα_0 = W_ρ(e^-Γ_nx_0,e^Γ_np_0) , where Γ_n=∑_ks_kγ_k. 
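Before turning to the filter, the two reversal identities underlying the protocols, Π D(α) Π = D(-α) for displacements and R_{π/2}^† S(z) R_{π/2} = S(-z) for squeezing, can be checked directly in a truncated Fock basis. The snippet below is such a check with arbitrary parameter values; the rotation is written as exp(-iθ a^†a), which differs from e^{-iθ a a^†} only by a global phase.

import numpy as np
from scipy.linalg import expm

N = 80
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
nvec = np.arange(N)

D = lambda al: expm(al * ad - np.conj(al) * a)
S = lambda z: expm(0.5 * (np.conj(z) * a @ a - z * ad @ ad))

Pi = np.diag((-1.0) ** nvec)                       # parity operator
R = np.diag(np.exp(-1j * np.pi / 2 * nvec))        # rotation R_{pi/2}, up to a global phase

al, z = 0.7 - 0.4j, 0.3 + 0.2j
print(np.allclose(Pi @ D(al) @ Pi, D(-al)))        # Pi D(alpha) Pi = D(-alpha)
print(np.allclose(R.conj().T @ S(z) @ R, S(-z)))   # R^dag S(z) R = S(-z)
# A single mid-path intervention therefore undoes a path-independent displacement:
print(np.allclose(Pi @ D(al) @ Pi @ D(al), np.eye(N)))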
To obtain the filter in this case one may write 𝔼[   W_ρ(e^-Γ_nx_0,e^Γ_np_0)] = 12π[f_n∗ W_ρ]x_0,p_0, where for x≠ 0 and p≠ 0, f_nx,p = ∫_-∞^∞∫_-∞^∞ e^-irlnx+tlnp𝔼[e^iΓ_nr-t] drdt, otherwise, we write f_nx,p = {[ ∫_-∞^∞ e^-itlnp𝔼[e^iΓ_nr] dt, x= 0  and p≠ 0,; ∫_-∞^∞ e^-irlnx𝔼[e^-iΓ_nt] dr, x≠ 0  and p= 0,; 1, x=p=0. ]. The form of the filter function here is obtained simply by changing variables from x,p to u,v=lnx,lnp for x≠ 0 and p≠ 0, following by performing Fourier transform with respect to new variables, and using the same analysis as in the case of displacement noise. We skip the detailed analysis of the filter functions and revisit the cases of non-additive noise. §.§ Manipulation on Squeezing Noise with Complex Squeezing Parameters For squeezing noise with complex parameters, as previously discussed, one can no longer employ the additivity of the actions on phase space in the case of positive real parameters. To overcome this issue one needs to consider large n limit, or equivalently when the quantity δℓ=max_1≤ k≤ nℓ_k-ℓ_k-1 approaches zero. With this one can sloppily suppose a Trotter-like limit for Eq. (<ref>) as lim_n→∞[∏_k=1^n𝒮s_kz_k]=lim_n→∞𝒮(∑_k=1^ns_kz_k). Following from this one can derive a filter for the noise similar to the case of positive squeezing parameters. In general, the relation Eq. (<ref>) may not hold since the operators involved in the actions are unbounded. However, the range of the parameters is controlled by the distribution μ̃, which is usually supposed to concentrate in some relevant region, and provides a reasonable approximation for the relation Eq. (<ref>). Roughly speaking, by setting R=max_1≤ k≤ nz_k, the offset concerning Eq. (<ref>) is controlled by 𝒪{exp[R^2δℓ^2]}. The contribution from large R^2 is suppressed by the small mass μ̃ around its region while the the contribution δℓ^2 vanishes within the limit. In other words, for squeezing noise with complex parameters, it is possible to choose a set of parameters in order to achieve the filter picture for the noise decoupling scheme via Eq. (<ref>), at least in the sense of approximation. §.§ Displacement and Squeezing Combine Noise and Observations on Non-Gaussian Noise Now let us discuss the condition for the generic noise in our model Eq. (<ref>). First, recall the form of generator Eq. (<ref>) G α_ℓ,z_ℓ = [12z^*_ℓa^2-z_ℓa^†^2+α_ℓa^† - α^*_ℓa] . It can be seen that the noise space ℐ_S is generated by a,a^†,a^2,a^†^2. We define 𝒢 = ,R_π/2,R_π,R_3π/2, leading to 1𝒢∑_g_j∈𝒢g^†_jℐ_Sg_j=0 , and hence 𝒢 can be termed as a control group for the generic noise in our model. Note that control operators in the cases of displacement noise and squeezing noise are also generated by 𝒢. By employing the control operators from the group 𝒢 and using the limiting procedure similar to Eq. (<ref>), one will be able to formulate a similar analysis as in the displacement noise, e.g. noise decouple picture Eq. (<ref>) or the filter for noise spectroscopy Eq. (<ref>). Lastly, let us consider a possible extension of the noise beyond the second order in creation and annihilation operators—non-Gaussian noise. For Gaussian noise (our model), one can see that the control space is simply generated by a cyclic group of rotation with fundamental angle π2. Now we show that one can construct a control group for non-Gaussian noise also from this principle. In particular, let us consider a noise operator Eq. 
(<ref>) with the replacement of the elementary generators by Gb_m,…,b_1= ∑_k=1^m[b_kℓa^k+b^*_kℓa^†^k] , a degree m polynomial in the creation and annihilation operators with (random) complex coefficients b_kℓ and b^*_kℓ for k=1,…,m. The noise space in this case is generated by a^k,a^†^k_k=1^m. Now we set g_1=R_π/m=e^-iπaa^†/m and the group 𝒢=g_1^j=R_jπ/m_j=0^2m-1, generated by g_1. Since g^†_1ag_1=e^iπ/ma, it follows that ∑_j=0^2m-1g^†_1^ja^pg_1^j = (1-e^2ipπ1-e^ipπ/m)a^p=0 for p≤ m. Hence, we will have 1𝒢∑_g_j∈𝒢g^†_jℐ_Sg_j=0 , or the group 𝒢 here can be considered as a control group for the noise with the generators Eq. (<ref>) as claimed. § NUMERICAL EXAMPLES In this section, we test our protocol numerically. We consider a channel with total length |λ| and numerically calculate the dynamics of ρ(ℓ), for ℓ∈ [0,|λ|], under the effect of noise and the external control intervention. Initializing the state in a pure, results in a mixed state ρ(ℓ) due to the effect of the noise. To quantify the effect of the noise, we calculate the fidelity of ρ(ℓ) compare to ρ(0) via F(ℓ): = F(ρ(ℓ), ρ(0)) = (Tr√(√(ρ(ℓ))ρ(0) √(ρ(ℓ))))^2. In the absence of noise, ρ(ℓ)=ρ(0) and the fidelity is 1 for all ℓ, this is the ideal scenario. Because noise reduces fidelity, the aim is to demonstrate that our control protocol results in higher fidelity than the no-control scenario. Without loss of generality and for the sake of simplicity, we choose that the initial state is a vacuum state with density operator ρ(0) = ρ_vac = |0⟩⟨0|; this state represents the ground state of a harmonic oscillator. The Wigner function of ρ_vac is a symmetric Gaussian function with standard deviation σ_0 as W_ρ_vacx,p = 1πe^-x^2+p^2/2σ_0^2. In the following subsections, we will look at two forms of noise: displacement noise and squeezing noise. And we quantitatively examine our control protocol in these instances. §.§ Displacement Noise In this subsection, we consider that the noise is only a random displacement noise D(ζ), such that by knowing the state ρ(ℓ), we calculate the state at ℓ + dℓ via ρ(ℓ + dℓ) = D(ζ) ρ(ℓ) D(ζ)^†, where ζ is a random complex number; ζ = ζ_r + i ζ_j. We model the noise by taking ζ_r and ζ_j to be random real values from a Compound Poisson Process (CPP). The use of CPP entails that we assume ζ jumps between different values and is constant between jumps. In terms of Wigner function dynamics, this means that the Wigner function moves in the direction of some ζ for some time and a jump in ζ is associated with changing direction. Moreover, CPP is defined by a rate η that determines how quickly the values of ζ_r and ζ_i change, and a probability distribution function (pdf) from which new ζ's are chosen. In our numerical simulation, we employ a Gaussian pdf with a standard deviation of σ≤σ_0. Furthermore, we choose a slowly varying noise with η≤ 0.2. The effect of such displacement noise on a vacuum state is shown in Figure <ref>, where its sub-figure (a) shows the Wigner function of the vacuum state and the dashed circle is a contour where the Wigner function is at 10 percent of its maximum. Sub-figure (b) shows the Wigner function of average density matrix ρ(|λ|) where the average density matrix is defined as ρ(ℓ) = ∑_ζρ(ℓ). Here, in our numerical calculation, we averaged over 100 noise trajectories. The thin blue line in Figure <ref>(d) shows the fidelity of ρ(ℓ), compare to ρ(0) for 100 noise trajectories and the thick blue line is the average Fidelity. 
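A schematic re-implementation of this experiment is sketched below. It is not the code used for the figures: the number of path steps, the jump probability per step, the noise width and the number of interventions are illustrative choices. Since each displacement-noise trajectory keeps the state pure, and Π D(ζ) Π = D(-ζ), the fidelity to the vacuum reduces to exp(-|A|^2), with A the running sum of the displacements whose signs are flipped by the interventions described next.

import numpy as np

rng = np.random.default_rng(3)
steps, n_traj = 400, 100
eta, sigma = 0.02, 0.05                  # jump probability per step and width of the Gaussian pdf

def trajectory(n_interventions):
    flip_at = set(np.linspace(0, steps, n_interventions + 2, dtype=int)[1:-1])
    zeta = sigma * (rng.normal() + 1j * rng.normal())
    A, sign = 0.0 + 0.0j, 1.0
    fid = np.empty(steps)
    for k in range(steps):
        if k in flip_at:
            sign = -sign                 # a parity intervention flips all subsequent displacements
        if rng.random() < eta:           # compound-Poisson jump to a new noise value
            zeta = sigma * (rng.normal() + 1j * rng.normal())
        A += sign * zeta
        fid[k] = np.exp(-abs(A) ** 2)    # F(rho(l), rho(0)) = |<0|D(A)|0>|^2 for a pure trajectory
    return fid

free = np.mean([trajectory(0) for _ in range(n_traj)], axis=0)
ctrl = np.mean([trajectory(50) for _ in range(n_traj)], axis=0)
print("average final fidelity without control:", free[-1])
print("average final fidelity with 50 interventions:", ctrl[-1])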
So, the numerical calculation shows a rapid drop in fidelity due to random displacement noise. To suppress the noise, we use our protocol and apply multiple interventions, such that Eq. (<ref>) becomes ρ(ℓ + dℓ) = D(ζ) Aρ(ℓ) A^†D(ζ)^† where A is an identity operator , if we do not apply intervention at ℓ and is Π if an intervention occurs. Figure <ref>(c) depicts the Wigner function of ρ(|λ|) when 50 intervention have been applied. The similarity of the Wigner function with the initial state's Wigner function is apparent. Furthermore, thin green lines in Figure <ref>(d) show F(ℓ) for 100 noise trajectories that involve intervention, and the thick Green line is the average of them. Using our intervention strategy, obviously resulted in improved fidelity. §.§ Squeezing Noise Here, we consider only squeezing noise S(ξ). we write similar equation to Eq. (<ref>) for the effect of the squeezing noise as ρ(ℓ + dℓ) = S(ξ) ρ(ℓ) S(ξ)^†. The controlled state with intervention follows ρ(ℓ + dℓ) = S(ξ) Aρ(ℓ) A^†S(ξ)^† where here A is either identity for the no-intervention step or a π2 rotation when we apply an intervention. Note that the action of this intervention is the same as what has been proposed in Eq. (<ref>) thanks to the symmetry of the initial state in our example. Without loss of generality, we consider that ξ is a random real number. Moreover, similar to displacement noise, we consider CPP for ξ with rate 0.2 and a Gaussian pdf with standard deviation. Figure <ref> shows the result of our numerical calculation for squeezing noise and its suppression using our protocol. Similar to the displacement case, top sub-figures—from left to right—show the Wigner function of the initial state, a final state without intervention, and a final state with the noise suppression protocol. Figure <ref>(d) shows the fidelity F(ℓ) for different noise trajectories without(with) intervention in thin blue(green) lines, and the thick lines are the average fidelity. Similar to displacement noise, our numerical calculation demonstrates that our noise suppression protocol results in improved fidelity. § CONCLUSIONS We considered state transfer problems with continuous variables where the channel by which the state passes is noisy. This problem is inspired by quantum communication by using continuous degrees of freedom in real physical settings, e.g. a photon carrying information via optical fiber or free space communication between satellite and ground stations, where in both cases one can expect a distortion of the signals by the noise from the medium. We modeled such noise as a random mixture of unitary operations. The noisy propagators of the state can be interpreted in a similar fashion as random unitary channels, where in this case we employ the path length as a dynamical parameter. We introduced a noise decoupling protocol based on this analogy, where we suppose that the path between the transmitter node and the receiver node can be intervened, allowing us to insert control operations to modify the effective generator of the propagators in such a way that the effect of the noise vanishes, i.e. to achieve an identity channel. We provided detailed analyses for the noise channels generated by linear and quadratic polynomials of creation and annihilation operators. 
We observed that the target state can be recovered for the general noise profile when the random variables do not depend on the path length, while for the case with path-dependent but ergodic and stationary noise, such a situation can be achieved for fast interventions. Furthermore, in principle, a similar analysis can be extended to the case of higher-order polynomial generators. We demonstrated our results numerically, where one can see the improvement in fidelity achieved by the proposed interventions. This suggests noise decoupling protocols as a promising technique to improve the efficiency of communication tasks in real physical settings, e.g. fiber and free space communications, as previously mentioned. § ACKNOWLEDGEMENTS We would like to thank Łukasz Rudniski for valuable discussions and suggestions. FS acknowledges support by the Foundation for Polish Science (IRAP project, ICTQT, contract no. 2018/MAB/5, co-financed by EU within Smart Growth Operational Programme). BT acknowledges support of IWY program at DATA61|CSIRO.
http://arxiv.org/abs/2307.03143v2
20230706171128
Supernova Limits on Muonic Dark Forces
[ "Claudio Andrea Manzari", "Jorge Martin Camalich", "Jonas Spinner", "Robert Ziegler" ]
hep-ph
[ "hep-ph", "astro-ph.HE" ]
CERN-TH-2023-134,TTP23-023, FR-PHENO-2023-06, P3H-23-042 [email protected] Berkeley Center for Theoretical Physics, Department of Physics, University of California, Berkeley, CA 94720, USA Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA [email protected] Instituto de Astrofísica de Canarias, C/ Vía Láctea, s/n E38205 - La Laguna, Tenerife, Spain Universidad de La Laguna, Departamento de Astrofísica, La Laguna, Tenerife, Spain CERN, Theoretical Physics Department, CH-1211 Geneva 23, Switzerland [email protected] Institut für Theoretische Teilchenphysik, Karlsruhe Institute of Technology, Karlsruhe, Germany Institut für Theoretische Physik, Universität Heidelberg, Germany [email protected] Institut für Theoretische Teilchenphysik, Karlsruhe Institute of Technology, Karlsruhe, Germany Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, 79104 Freiburg, Germany Proto-neutron stars formed during core-collapse supernovae are hot and dense environments that contain a sizable population of muons. If these interact with new long-lived particles with masses up to roughly 100 MeV, the latter can be produced and escape from the stellar plasma, causing an excessive energy loss constrained by observations of SN 1987A. In this article we calculate the emission of light dark fermions that are coupled to leptons via a new massive vector boson, and determine the resulting constraints on the general parameter space. We apply these limits to the gauged L_μ-L_τ model with dark fermions, and show that the SN 1987A constraints exclude a significant portion of the parameter space targeted by future experiments. We also extend our analysis to generic effective four-fermion operators that couple dark fermions to muons, electrons, or neutrinos. We find that SN 1987A cooling probes a new-physics scale up to ∼7 TeV, which is an order of magnitude larger than current bounds from laboratory experiments. Supernova Limits on Muonic Dark Forces Robert Ziegler 0000-0003-2653-7327 August 1, 2023 ====================================== § INTRODUCTION Understanding the fundamental nature of Dark Matter (DM), which comprises ∼84% of the matter of the Universe <cit.>, has become one of the most pressing problems in contemporary physics (see Ref. <cit.> for a review). A wide class of theoretical models describe DM as new light (sub-GeV) particles, which couple only very weakly to the particles of the Standard Model (SM). The simplest possible interactions between these two sectors have been systematically classified within the so-called portal framework <cit.>, giving rise to several benchmark scenarios for SM interactions with a dark sector that can be tested experimentally (see Refs. <cit.> for reviews). Interestingly, if the dark particles are sufficiently light to be produced in stellar plasmas, then their emission modifies the standard picture of stellar evolution and stringent constraints on SM interactions with the dark sector can be obtained also from astrophysical observations <cit.>. Ordinary stars yield strong constraints on DM coupled to electrons and photons <cit.>, while the extreme temperatures and densities reached in the proto-neutron stars (PNS) formed during core-collapse supernovae (SN) allow one to probe also DM couplings to nucleons <cit.>, pions <cit.>, hyperons <cit.> and muons <cit.>. Most of these analyses focus on direct production and emission of light dark bosons (such as axions or dark photons) from the stellar medium. 
In this article we aim to study instead the case where these bosons merely serve as massive mediators between dark fermions and SM leptons; i.e. they serve only as a portal to the dark sector, which allows the production of sufficiently light dark fermions in stellar plasmas. Prominent examples of this scenario are gauged lepton flavor models such as U(1)_L_μ - L_τ, which contain a light massive gauge boson <cit.> and a dark sector charged under the corresponding group <cit.>. This type of scenarios has attracted much attention as they can simultaneously address the (g-2)_μ  anomaly <cit.>, provide a DM candidate with the right abundance, and contribute to the effective number of neutrino species, alleviating the Hubble constant tension <cit.>. Although we will present novel constraints on this scenario from SN 1987A later on, we perform our analysis within a more general setup. For definiteness, let us consider a vector mediator Z^' with mass m_Z^' and couplings to SM leptons and dark fermions χ with mass m_χ, L ⊃ Z_ν^'( g_ℓℓγ^νℓ + g_ν_ℓν_ℓγ^ν P_L ν_ℓ + g_χχγ^νχ) , where ℓ = e, μ,τ, and g_ℓ, g_ν_ℓ and g_χ are generic couplings. Assuming that the dark fermions are sufficiently light (m_χ≲ 150), we will show in the following that their production from the PNS in SN 1987A leads to stringent constraints on their couplings to SM leptons. For the benchmark U(1)_L_μ - L_τ model, this excludes large regions of the parameter space targeted by future experiments <cit.>. While our analysis is valid for any mass of the Z^', we can integrate it out and describe its contribution with an effective four-fermion operator, if is much larger than the PNS's temperature and chemical potentials. This leads to significant simplifications in the analysis, and allows us to extend it to derive SN 1987A bounds on completely generic interactions of the dark sector with leptons through heavy portal mediators. In the heavy Z^' limit, our calculations are in fact analogous to the ones necessary to study the SM production of neutrinos from leptons in stellar plasmas, which have received much attention over the past half century, since the seminal works in the 1960's <cit.> (see also Refs. <cit.> for production of light dark fermions from heavy new physics). On the other hand, in case the Z^' is a light and narrow state, the calculation is similar to the on-shell production of massive vector bosons coupled to leptons in the plasma <cit.> (see also Refs. <cit.> for massive axions). In this article we focus on the portal interactions with muons, but we also study neutrinos, which could be naturally linked to muons via SU(2)_L. Electrons however form a highly degenerate and ultra-relativistic plasma in the PNS, which might lead to important medium effects in the electron and photon dispersion relations, requiring the inclusion of other production mechanisms not relevant for muons and neutrinos. With this caveat in mind, our calculations are easily extensible to electrons, generalizing and updating the pioneering work of Ref. <cit.> and improving the results presented in Ref. <cit.>. The rest of the paper is organized as follows. In Sec. <ref> we outline the classical SN argument to constrain new exotic cooling agents using the neutrino flux observed from SN 1987A. Besides describing the general theoretical framework, we specify the SN simulations that we employ in our numerical analysis. In Sec. <ref> we describe and compute the rates of the main emission mechanisms induced by the model in Eq. (<ref>). 
We focus on extracting the main physical features of the rates using different approximations in the various regimes of the Z^' mass, and on deriving analytical estimates. However, our final results rely on exact numerical computations whose details are deferred to Appendices. In Sec. <ref>, then, we implement these calculations of the rates in the SN simulations and derive the constraints on the parameter space of the Z^' model in Eq. (<ref>). We also generalize this analysis in terms of effective operators in the heavy Z^' limit, and to one particular realization of the model arising from a L_μ-L_τ gauged symmetry. Finally, in Sec. <ref> we summarize the results of our paper and close with a brief outlook. § SUPERNOVA COOLING In the dense and hot environment within proto-neutron stars <cit.> neutrinos become trapped and a thermal population of muons is predicted to arise <cit.>. New light dark particles that couple to leptons, e.g. via the interactions in Eq. (<ref>), can be produced efficiently in the stellar plasma, leading to a significant loss of energy if they can escape from the PNS. The corresponding dark luminosity L_χ is then subject to the classical bound L_χ≲ L_ν at 1s post-bounce, where L_ν is the neutrino luminosity <cit.>. This limit is obtained from the observation of a neutrino pulse over ∼ 10s <cit.> during SN 1987A <cit.>, which is in accordance with the predictions of the standard theory of core-collapse SN (see Refs. <cit.> for a critical reappraisal of this limit[It has been also recently noted in Ref. <cit.> that there is a coherent disagreement between the results of state-of-the-art simulations and the observed neutrino signal of SN 1987A during the first second.]). Here, we apply this argument to scenarios where light dark fermions couple to leptons with interactions such as those in Eq. (<ref>). One needs to distinguish two regimes based on the mean free path (MFP) of the dark fermions in the plasma or, equivalently, the strength of the portal interactions. If the dark particles are very weakly coupled (or the MFP is much larger than the radius of the PNS) then they free stream out from the SN once produced, whereas for large couplings (or MFP much shorter than the radius of the PNS) they thermalize with the medium and get trapped inside of the PNS. In the free-streaming regime the general expression for the total energy-loss rate per unit volume, Q, for a given emission process is Q= ∫[∏_init, id^3p_i/(2π)^3 2E_i f_i ] [ ∏_final, jd^3p_j/(2π)^3 2E_j (1± f_j)] × (2π)^4 δ^4 (∑_i p_i-∑_j p_j) ∑_ spins|ℳ|^2 E_χ . These are thermal integrals over the phase space of all the initial- and final-state particles weighted by their number density distributions f_i and the Pauli blocking or Bose enhancement factors (1∓ f_j), respectively. Furthermore, |ℳ|^2 is the the squared matrix element of the given production process and E_χ is the total energy carried away by the dark particles. In the calculations of the free-streaming regime, one conventionally uses f_χ =0 for the new particles, because their occupation numbers inside the PNS are very low and not thermalized by assumption. 
In the trapping regime, on the other hand, the dark-sector particles are in thermal equilibrium with the plasma and they are emitted from a surface with radius r_χ (dark sphere) following a law analogous to the one of the black body radiation, L_χ^ trap = 𝔤_χ/π r_χ^2 T_χ^4 ∫_x_m^∞ dx x^2 √(x^2 - x_m^2)/e^x + 1 , where 𝔤_χ is the number of degrees of freedom of the χ particle (𝔤_χ=2 for massive dark fermions), x_m = m_χ /T_χ and T_χ = T( r_χ) is the temperature of the dark sphere. The radius r_χ is defined, as is conventional in astrophysics <cit.>, through the optical depth τ_χ (r), by requiring τ_χ (r_χ) = ∫_r_χ^∞dr/λ(r)= 2/3 , where λ(r) is a suitable spectral average of the dark fermion's MFP at a radius r. In this work, we use a “naive” thermal average λ (r) = ⟨λ (r, p_χ)⟩_χ≡𝔤_χ/n_χ (r)∫d^3 p_χ/(2 π)^3λ (r, p_χ)/e^E_χ/T(r) + 1 , for computational simplicity [We have checked that other averages, such as the conventional Rosseland MFP <cit.>, give very similar results.] (see Appendix <ref> for our definitions of thermal averages). The energy-dependent MFP λ (r, p_χ) is related to the total rate of interaction of a dark-sector particle in the medium, Γ_χ=v_χ/λ (r, p_χ), through its velocity v_χ=p_χ/E_χ. The contribution to Γ_χ of a given process with a bunch of target particles b colliding with χ in the initial state is defined through 𝒞_ abs^b = 𝔤_χ∫d^3p_χ/(2π)^3 f_χΓ_χ^b . The quantity 𝒞_ abs is the collision operator describing the absorption rate per unit volume of the medium 𝒞_ abs^b= ∫[∏_init, id^3p_i/(2π)^3 2E_i f_i ] [ ∏_final, jd^3p_j/(2π)^3 2E_j (1± f_j)] × (2π)^4 δ^4 (∑_i p_i-∑_j p_j) ∑_ spins|ℳ|^2] , which uses the same definitions as in Eq. (<ref>), except for |ℳ|^2 which is now the squared matrix element of the given absorption process[Denoting Γ_ours for our definition, the standard approach in the literature <cit.> is to work in terms of emission rates Γ_E = 𝔤_χ f_χΓ_ours. However, one then has to use Boltzmann-equation arguments to argue which rate has to be used for absorptive processes, leading to the reduced absorption rate Γ_A. The definition Γ_ours is chosen such that Γ_ours=Γ_χ^b = n_b ⟨σ v⟩_b, which naturally leads to Γ_ours = Γ_A (see Appendix <ref>).]. For the numerical analyses of this paper we use SN simulations including muons presented in Ref. <cit.> and whose radial profiles for the relevant quantities are reported in <cit.>. Our fiducial results are obtained using the simulation labeled as SFHo-18.80, which reaches the lowest temperatures and, therefore, will lead to the most conservative limits on the dark luminosity (at 1 s post-bounce). The upper bound is set by the neutrino luminosity calculated within the same simulation, which for SFHo-18.80 is given by[We use the total co-moving neutrino luminosities reported in the simulations at the radius of the neutrino-sphere ∼16 km. We thank R. Bollig and H-. T. Janka for facilitating us the necessary data to make these estimates. See also Ref. <cit.>.] L_χ≤ L_ν=5.7×10^52 erg s^-1 . For a rough estimate of the systematic uncertainties related to SN modeling, we will also show the more stringent limits obtained from using the hotter SFHo-20.0 simulation, which gives L_χ≤ 1.0×10^53 erg s^-1. In the free-streaming regime, the dark luminosity is obtained as a volume integral of Eq. (<ref>), L_χ=∫ Q dV, while in the trapping regime we use Eq. (<ref>). Finally, it will be useful to estimate the contributions to the dark luminosity of the different processes to understand their relative importance. 
For this, we define “typical PNS conditions” as those at 1 s post-bounce and at a radius ≈ 10 km. This region dominates the volume emission in L_χ and is representative of the bounds in the free-streaming regime. Using the simulation SFHo-18.80 <cit.>, this approximately corresponds to the following typical PNS conditions: T = 30 MeV, ρ = 2×10^14 g cm^-3, μ_μ = 100 MeV, μ_ν_e = 20 MeV, μ_ν_μ = -10 MeV, μ_e = 130 MeV, Y_μ = 0.026, Y_e = 0.12. Here T denotes the temperature, ρ the density, μ_l the chemical potential of the lepton l and Y_ℓ is the number density fraction of the charged lepton ℓ relative to the one of baryons. For the Y_ℓ we quote the results derived from the rounded temperature and chemical potentials in Eq. (<ref>) and, therefore, they are slightly different from those reported in <cit.>. Let us stress again that Eq. (<ref>) will only be used for numerical estimates, while our final results and constraints on the models will be obtained using the full radial profiles of all relevant thermodynamical quantities. § PRODUCTION AND ABSORPTION RATES There are two main production mechanisms of χχ̅ pairs from muons and neutrinos in SN (see top and bottom panel of Fig. <ref>): (i) annihilation, μ^-μ^+→χχ̅ and ν_ℓν̅_ℓ→χχ̅; (ii) photoproduction, γμ^-→μ^-χχ̅. We do not consider bremsstrahlung processes, μ^-p→μ^- pχχ̅, because they are suppressed with respect to photoproduction (or semi-Compton production) of (pseudo)scalars and Z^' <cit.>. Note also that this is not generally true for production from electrons, because they are ultra-relativistic and form a highly degenerate system, which suppresses photoproduction compared to bremsstrahlung and annihilation <cit.>. Moreover, there are important plasma effects which, for example, dress the electron with an effective mass m_e^*∼ 10 MeV and give rise to pseudo-particle excitations that need to be taken into account in a realistic analysis (see e.g. Ref. <cit.> for the emission of massive axions from electrons in SN). Nonetheless, the calculations we present in this work can be easily extended to electrons and compared with previous literature where all these effects have been neglected <cit.>. We will estimate some of them below in Sec. <ref>. In case of absorption, there are the inverse processes μ^-χχ̅→γμ^- and χχ̅→μ^-μ^+, whose rates are related by detailed balance to those of the photoproduction and annihilation production, respectively, provided that the χ and χ̅ particles reach thermal equilibrium. In addition, other scattering processes may contribute to the diffusion and energy transport in the trapping regime, such as χμ^-→χμ^- and χν_ℓ→χν_ℓ (see middle panel of Fig. <ref>), or processes in the dark sector such as χχ̅→χχ̅. In Appendices <ref> and <ref> we provide the cross sections for all relevant 2 → 2 and 2 → 3 processes needed to calculate the energy-loss and absorption rates. In the following, we discuss in detail the contributions of the annihilation and photoproduction topologies.
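Before doing so, we note that the typical conditions quoted above can be cross-checked numerically: the charged-lepton fractions follow from relativistic Fermi-Dirac integrals for the net lepton number densities divided by the baryon density n_B = ρ/m_N. The following minimal Python sketch is our own illustration (in particular, the use of an average nucleon mass m_N to convert ρ into n_B is an assumption on our part); it returns Y_μ ≈ 0.03 and Y_e ≈ 0.12 for the numbers quoted above.

import numpy as np
from scipy.integrate import quad

hbarc    = 197.327e-13        # hbar*c in MeV*cm
m_N      = 938.9              # average nucleon mass in MeV (assumption)
MeV_to_g = 1.7827e-27         # 1 MeV/c^2 in grams

def net_density(T, mu, m, g=2):
    # net number density (particles minus antiparticles) of a fermion species, in MeV^3
    def integrand(p, sgn):
        E = np.sqrt(p**2 + m**2)
        return p**2/(np.exp((E - sgn*mu)/T) + 1.0)
    pmax = 30*T + 10*abs(mu)
    n_part = quad(integrand, 0.0, pmax, args=(+1,))[0]
    n_anti = quad(integrand, 0.0, pmax, args=(-1,))[0]
    return g/(2*np.pi**2)*(n_part - n_anti)

T, rho  = 30.0, 2e14                       # MeV, g/cm^3
n_B     = rho/(m_N*MeV_to_g)               # baryons per cm^3
to_cm3  = 1.0/hbarc**3                     # converts MeV^3 to cm^-3

Y_mu = net_density(T, 100.0, 105.66)*to_cm3/n_B
Y_e  = net_density(T, 130.0, 0.511)*to_cm3/n_B
print(Y_mu, Y_e)                           # roughly 0.03 and 0.12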
The function I_s is a (dimensionless) angular integral over the annihilation cross section σ(s) = σ(μ^+ μ^- →χχ̅), I_s = ∫ d cosθ s √(1 -4m_μ^2/s) σ(s) , which depends on the colliding angle θ between the muon and anti-muon through the Mandelstam variable s in the PNS frame, s = 2 ( m_μ^2 + E_+ E_- -p_+ p_- cosθ). The cross section is physical only above the 2-particle threshold, which imposes the kinematic constraint s≥ 4 max (m_χ^2, m_μ^2) in the angular integral. The annihilation cross section in Eq. (<ref>) for the Z^' model in Eq. (<ref>) is σ(s) = g_μ^2 g_χ^2/(3 𝔤_μ^2 π) s/[(s-m_Z^'^2)^2 + m_Z^'^2 Γ_Z^'^2] β_χ(s)/β_μ(s) κ_μ(s) κ_χ(s) , where we have introduced β_i(s)=√(1-4m_i^2/s), κ_i(s)= 1 + 2 m_i^2/s and the total Z^' width Γ_Z^' = m_Z^'/(12 π) ∑_i g_i^2 κ_i(m_Z^'^2)β_i(m_Z^'^2)θ(m_Z^'-2m_i) , and where θ(x) is the Heaviside step function. Note that the average energy carried away by the dark fermions in the annihilation process is equal to the thermally averaged center-of-mass (CM) energy of the leptons, ℰ_l ≡√(⟨ s ⟩_l l̅). For typical conditions in the PNS, Eq. (<ref>), ℰ_μ∼ 280 MeV, which sets the scale for 2 m_χ above which the production of χ's in the plasma will become exponentially (“Boltzmann") suppressed by the distribution functions f_±. In addition, this energy scale allows one to define three regimes of m_Z^' in Eq. (<ref>) depending on which term dominates the denominator: (i) a “heavy regime" in which m_Z^'≳ 1 GeV ≫ℰ_μ, so that the Z^' is too heavy to be produced on-shell; (ii) the “resonant regime" where the Z^' can be produced on-shell, 2 m_μ≤ m_Z^'≲ℰ_μ; and (iii) the “light regime” with Z^' masses below the two-muon threshold, m_Z^' < 2 m_μ, so that the Z^' is too light to be produced on-shell. Analogous expressions can be defined for electrons and neutrinos by replacing the couplings and masses accordingly (notice that for neutrinos 𝔤_ν_ℓ=1). Moreover, the average CM energies in these cases are ℰ_e ∼ 160 MeV and ℰ_ν∼ 130 MeV, and analogous regimes to those for the muons can be formulated for neutrino-antineutrino and electron-positron annihilation. The light regime in these cases is restricted to extremely small Z^' masses, making it irrelevant for the range of vector boson masses we are considering here. The demarcation of these regimes is useful because one can use approximations to derive analytic results and isolate the main physical factors at play. In the following, we discuss these approximations and describe their contributions to the absorption rate Γ_χ. §.§.§ Heavy regime In this case, the denominator in the propagator (see Eq. (<ref>)) is dominated by the Z^' mass. Expanding in powers of s/m_Z^'^2 up to leading order, the cross section can be easily integrated analytically, giving a function I_s (E_+, E_-, m_χ, m_μ) proportional to the effective coupling g_χ^2 g_μ^2/m_Z^'^4. We can further approximate this expression by taking the high-energy limit m_χ→0 and m_μ→ 0, obtaining[In the following discussion we fix 𝔤_μ=2 with the understanding that some intermediate formulas change by factors of 2 for the neutrino case.] I_s^ heavy (E_+, E_-, 0, 0) = 8 g_χ^2 g_μ^2/(9 π) E_+^2 E_-^2 /m_Z^'^4 . Also setting m_μ→ 0 in the integrals in Eq. (<ref>), the integrations can be carried out analytically, giving Q^ heavy = 2 g_χ^2 g_μ^2/(9 π^5) T^9 /m_Z^'^4 [ H_4(y)H_3(-y)+(y → -y) ] . Here we have re-scaled the chemical potential y=μ/T and introduced the functions H_n(y)=∫_0^∞ dx x^n/(e^(x-y)+1) = - n! Li_n+1(-e^y) , where Li_n+1(z) is the polylogarithm of order n+1.
If we also take vanishing chemical potentials, we recover the results in Ref. <cit.> Q^ heavy_0 = 4 g_χ^2 g_μ^2/(9 π^5) T^9/m_Z^'^4 F_4 F_3 , in terms of the Riemann ζ-function F_n=H_n(0)=n! (1-2^-n) ζ(n+1) , with F_4 F_3 ≈ 133. We have also included a subindex in Q to indicate that this is a zeroth-order approximation neglecting masses and chemical potentials of the leptons. In order to assess the accuracy of the above approximations, we compare Q in Eq. (<ref>) for massless dark fermions and the cross section in the heavy Z^' limit with Eq. (<ref>) for different SM particles at the typical conditions of the PNS in Eq. (<ref>). For muons one finds Q_μ/Q_0 ≈ 0.33, while for neutrinos and electrons one finds Q_ν/Q_0 ≈ 0.99 and Q_e/Q_0 ≈ 0.54 (using the physical electron mass in vacuum), respectively. The thermal suppression of the muon population is mild for these conditions in the PNS, Y_e≃4 Y_μ. The positron abundance is also suppressed by the large electronic chemical potential and, hence, for the same couplings to electrons and muons one obtains similar rates. With these approximations one can estimate the parametric dependence of the energy loss rate per unit mass (i.e. the emissivity) produced by lepton annihilation in the heavy regime as ϵ_ ann^ heavy = ϵ_ max( T/30 MeV )^9 (√(g_χ g_l) 4.1 TeV/m_Z^')^4 , where we have divided Eq. (<ref>) by the density in Eq. (<ref>), and ϵ_ max=2.1×10^19 erg s^-1 g^-1 has been estimated dividing L_χ in Eq. (<ref>) by the total mass of the PNS in this simulation, M_ PNS=1.351 M_⊙. §.§.§ Resonant regime If the Z^' can be produced on-shell, then the denominator in Eq. (<ref>) is dominated by the Z' width Γ_Z^', and it can be replaced by π/(m_Z^'Γ_Z^') δ (s-m_Z^'^2) in the narrow width approximation. The δ-function can be used to perform the angular integration in Eq. (<ref>) and for m_χ→ 0 this gives (neglecting terms of relative size 2 m_μ^2/m_Z^'^2) I_s^ res = g_χ^2 g_μ^2/24 m_Z^'^3/(Γ_Z^' E_+ E_-) . The energy integrations of Eq. (<ref>) can be well-approximated by neglecting the chemical potentials, but keeping a non-zero muon mass in the integrand, giving Q^ res_μ =0 = g_μ^2 BR_χ/(4 π^3) m_Z^'^2 T^2 m_μ e^-2 m_μ/T , where BR_χ≡ BR (Z^'→χχ) denotes the invisible Z^' branching ratio, and numerically m_μ/T e^-2 m_μ/T≈ 0.004. For muon annihilation this indeed yields a good approximation, with Q_μ/Q_μ=0≈ 0.95, while for electrons one can set m_e → 0 in the integrals, giving a result similar to Eq. (<ref>), which can be further approximated by[For y ≫ 1, one has H_n(y) ≈ y^n+1/(n+1), H_n(-y) ≈ e^-y n!. ] Q^ res_m =0 = g_μ^2 BR_χ/(16 π^3) m_Z^'^2 T μ_e^2 e^-μ_e/T , and numerically μ_e^2/4T^2 e^- μ_e/T≈ 0.06, resulting in Q_e/Q_m=0≈ 1.6. Finally, neglecting lepton masses and chemical potentials simplifies this to Q^ res_0 = g_μ^2 BR_χ/(4 π^3) m_Z^'^2 T^3 F_1 F_0 , where numerically F_1 F_0 ≈ 0.57. This is a good approximation for neutrino annihilation, with Q_ν/Q_0 ≈ 0.95, while the energy-loss rates for electrons (muons) are smaller by a factor 10 (100). Importantly, the contribution to the energy-loss rates in the resonant regime scales perturbatively with the couplings as ∼𝒪(g^2) instead of ∼𝒪(g^4) in the heavy or light regimes. In fact, for BR_χ=1, one should recover the results obtained for the coalescence Z^' production mechanisms in Ref. <cit.>[Indeed, using the approximation f_+ (E_+) f_- (E_-) ≈ f_Z^' (E_+ + E_-), we reproduce their expression for the Z^' production rate in the light Z^' limit and for massless leptons and χ, up to a factor 2/3. See also footnote 3 in Ref.
<cit.> for a discussion of the possible origin of these discrepancies.]. Also, the various rates in Eqs. (<ref>), (<ref>) and (<ref>) all scale quadratically with the Z^' mass. From these expressions one can readily obtain the emissivities, which for neutrino annihilation read ϵ_ ann,ν^ res = ϵ_ max( T/30 )^3(g_ν_ℓ/10^-9 m_Z^'/10 )^2 BR_χ , where we have used the same approximations as in Sec <ref> which are valid up to m_Z^'∼200 MeV. As discussed above, emissivities for electrons and muons are expected to be smaller by a factor 10 and 100, respectively. §.§.§ Light regime In this case the denominator of the propagator is dominated by s, and the cross section can be integrated analytically. For massless χ and muons one obtains I_s^ light = g_χ^2 g_μ^2/6 π , which is independent of E_+, E_-. Similarly as in the resonant regime, the energy integrations of Eq. (<ref>) can be approximated by neglecting the chemical potentials, but keeping a non-zero lepton mass in the integrand Q^ light_μ =0 = g_χ^2 g_μ^2/12 π^5 T^2 m_μ^3 e^-2 m_μ/T , where m_μ^3/T^3 e^-2 m_μ/T≈ 0.04. For muon annihilation this indeed yields a good approximation, with Q_μ/Q_μ=0≈ 0.85, while for electrons one can set m_e → 0 in the integrals (but keep non-zero chemical potentials), giving a result similar to Eq. (<ref>) Q^ light_m_ℓ =0 = g_χ^2 g_μ^2 /72 π^5 T^2 μ_e^3 e^-μ_e/T , and numerically μ_e^3/6T^3 e^- μ_e/T≈ 0.2, resulting in Q_e/Q_m=0≈ 1.2. Finally neglecting lepton masses and chemical potential simplifies to Q^ light_0 = g_χ^2 g_μ^2/12 π^3 T^5 F_2 F_1 , where numerically F_2 F_1 ≈ 1.5. This is a good approximation for neutrino annihilation with Q_ν/Q_0 ≈ 0.97, while for electrons (muons) the energy-loss rates and emissivities are smaller by a factor 10 (100) than predicted by this formula. Nevertheless, using Eq. (<ref>) we obtain the emissivity ϵ_ ann^ light = ϵ_ max( T/30 )^5 ( √(g_χ g_μ)/3 × 10^-5)^4 . §.§.§ Annihilation and scattering contributions to trapping Given the scattering of a χ with another particle b, the absorption rate can be approximated by (see Appendix <ref>) Γ_χ^b≈(∏_i F_ deg,i)𝔤_b ∫d^3 p_b/2π^3f_bσ(s) v , where 𝔤_b are the number of degrees of freedom of the particle b (𝔤_b=2 for b=χ̅,μ,e and 𝔤_b=1 for b=ν_ℓ), v=√((p_χ· p_b)^2- m_χ^2 m_b^2) /E_χ E_b is the Møller velocity and σ (s) is the scattering cross section for χ + b → X_1 + + X_n <cit.>. Moreover, the index i runs over the final-state particles and we have approximated the effect of the Pauli blocking by its thermal average or degeneracy factors <cit.> F_deg,i = ⟨ 1-f_i ⟩_i = 𝔤_i/n_i∫d^3p_i/(2π)^3 f_i(1- f_i) , where 𝔤_i and n_i denote their degrees of freedom and number densities, respectively. There are two types of processes related by crossing to the annihilation diagram that are relevant for the trapping regime: Inverse annihilation, χ χ→μ^- μ^+ and scattering, χ μ^- →χ μ^- and χ̅ μ^- →χ̅ μ^-. Scattering processes are kinematically more involved as they exchange a Z^' in the t-channel. For very light Z^''s (m_Z^'≪ T), they have a differential cross section with a Coulombian enhancement in the forward direction which involves a small momentum transfer and, therefore, little contribution to the thermalization rate between the dark and SM sectors. The interplay between the contributions of inverse annihilation and scattering to the absorption rate is similar to the case of heavy-lepton neutrinos in SN <cit.>, as recently emphasized in Ref. <cit.>. 
In the absence of self-interactions between the dark fermions, these two processes really define two surfaces that determine different properties of the dark luminosity in the trapping regime. The freeze-out of inverse annihilation first fixes the number flux of χ's. The outgoing flow then thermalizes via scattering processes with the leptons until they decouple at a larger radius. For Z^' masses in the resonant regime, absorption rates will be dominated by the inverse annihilation and the two surfaces coalesce into a single dark sphere that determines the thermal emission of the χ's. In the heavy regime both mechanisms can be important and this distinction must be kept in mind. This is also reminiscent of models with new neutrino self-interactions and their effect in the dynamical evolution of the SN <cit.>. Notably, when considering the model in Eq. (<ref>) with neutrino interactions, processes such as ν_ℓν̅_ℓ→ν_ℓν̅_ℓ also occur. These effects may lead to a fundamentally different incarnation of the SN 1987A cooling limit, valid in the trapping regime <cit.>, which is however beyond the scope of the classical SN 1987A cooling bound that we apply in our analysis. With all this in mind, our fiducial analysis in the trapping regime includes both processes, inverse annihilation and scattering, in the calculation of the MFP to obtain one single dark sphere that determines L_χ. However, we repeat the calculations for the case where we do not include the scattering processes. We take the variation of our results as an indicator of the potential systematic uncertainties involved in our treatment of the trapping regime. In addition, dark elastic scattering, χχ̅→χχ̅, may become relevant in the trapping regime, as recently discussed in Refs. <cit.>. In our case, however, this will not play an important role for the calculations of L_χ in the trapping regime. The reason is that dark elastic scattering does not directly contribute to maintaining the population of χ's in thermal equilibrium with the SM plasma after they freeze out (after inverse annihilation turns off). Moreover, for low , the rate of dark elastic scattering is resonant and overwhelmingly larger than scattering or inverse photoproduction (discussed below in Sec. <ref>), that do tend to maintain the thermal equilibrium between the dark and SM sectors. This situation is particularly relevant for muons, whose population drops significantly at the outer layers of the PNS where freeze out of the χ's occurs. Therefore, they would effectively decouple immediately after, except for the contributions to the MFP induced by their interactions with the leptons, which are already accounted for in our fiducial analysis[A proper treatment of the thermalization rates of dark matter with self interactions is, actually, an important issue when studying the structure predicted by these models at the center of galactic halos (see e.g. Ref. <cit.>).]. Nevertheless, we have studied the contributions of dark elastic scattering processes for completeness, and discuss in Sec. <ref> the consequences for our results if these are included at face value in the calculation of Γ_χ and the MFP. §.§ Photoproduction For muons we use two approximations: (1) Describe the Pauli blocking of the final muon by (1-f_i)→ F_ deg,i in Eq. (<ref>) and, (2) in the phase space integrals of the initial particles we take the extreme non-relativistic limit where the muon is static and recoilless. 
Thus, the kinematics is evaluated in the muon's rest frame, s=m_μ^2+2m_μω, with ω the photon's energy, and E_χ+E_χ̅=ω. One arrives at the conventional formula <cit.> Q_γ^ nr=n_μ F_ deg,μ/π^2∫_ω_0^∞ dωω^3 f_γ σ(s) , where ω_0=(s_0-m_μ^2)/(2m_μ) and s_0 is the kinematic threshold of the process. While Eq. (<ref>) is approximately valid for muons, electrons are instead ultra-relativistic in the PNS. If we neglect the electron's mass in the phase space integrals[Since the photoproduction cross sections have a kinematic singularity in the limit m_e→ 0, we keep the physical value of the mass in numerical estimates, neglecting also plasma effects.] and assume that E_χ+E_χ̅≈(ω + ω^')/2, then Q_γ^ r =F_ deg,e/8π^4∫^∞_0 dωω f_γ∫^∞_0dω^'ω^' (ω+ω^')f_e ×∫^+1_-1d(cosθ)sσ(s) , where ω^' is the electron energy and s=2ωω^'(1-cosθ). Similarly to the annihilation topologies, one can estimate the average energy available for the production of dark particles in the photoproduction process. For this we use the average CM energy at threshold, ℰ_ℓ^γ = √(⟨ s⟩_ℓℓ̅) - m_ℓ, and one can also define different regimes of m_Z^' for photoproduction: (1) the “heavy regime” where m_Z^'≳ 1 GeV ≫ℰ_ℓ^γ; (2) the “resonant regime” with m_Z^'≲ℰ_ℓ^γ, where the Z^' can be produced on-shell. For the conditions specified in Eq. (<ref>) we find ℰ_μ^γ∼ 90 MeV and ℰ_e^γ∼ 150 MeV. In Appendix <ref> we provide the results for the photoproduction cross sections along with some details of the calculations. In addition, we have checked the validity of the approximations in Eqs. (<ref>) and (<ref>) by calculating the full 5-body phase space thermal integrals exactly, as described in the seminal work of Ref. <cit.>, where this was done for effective (axial)vector operators. For completeness, we outline this calculation in Appendix <ref>. We find that the approximate formulas provide a very accurate description, within ∼20% (muons) and ∼40% (electrons) of the full photoproduction rates in the resonant regime and neglecting degeneracy of the final state lepton. In the heavy regime, Eq. (<ref>) underestimates the full result for muons by a factor ∼8. Nevertheless, as shown below in Sec. <ref>, the emission rate for heavy m_Z^' is dominated by annihilation by orders of magnitude. On the other hand, for the highly degenerate electrons in the PNS, we find that approximating the Pauli blocking effects by F_ deg,e overestimates the full energy loss rate by a factor ∼3. In the case of muons, the degeneracy factor F_ deg,μ instead agrees with the exact result within ∼5% accuracy. In the following, we discuss in more detail the two regimes of photoproduction and the contribution of inverse photoproduction to the interaction rate Γ_χ. §.§.§ Heavy regime The cross section of the process μ^-γ→μ^-χχ̅, induced by the exchange of a heavy Z^', is σ(s) = α g_μ^2 g_χ^2 m_μ^2/(1728π^2 m_Z^'^4) 1/(ŝ^2 (ŝ-1)^3) ×(-55ŝ^6+682ŝ^5 + 483ŝ^4-968ŝ^3 -169ŝ^2 +30ŝ -3+12ŝ^2( 2ŝ^4 -14ŝ^3 -87ŝ^2 -52ŝ+1) logŝ) , where ŝ=s/m_μ^2, α denotes the fine-structure constant, and we have taken for simplicity the massless limit m_χ→0. This cross section is equivalent to the expression obtained by Dicus in <cit.> for the photoproduction of a neutrino pair using only vectorial couplings[In fact, our results agree with Ref. <cit.> but disagree with Ref. <cit.>, which cites Ref. <cit.> with the opposite sign in the term ∝ 120ŝ/(ŝ-1)^2 in Eq. (3.12). We thank Georg Raffelt for confirming this typo. See App. <ref> for more details.].
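To illustrate how this cross section enters the static-muon rate Q_γ^nr above in practice, a minimal Python sketch is given below. It is our own illustration: n_μ and F_deg,μ must be supplied from the local thermodynamic conditions, the numbers at the bottom are placeholders rather than a physical benchmark, and all quantities are in natural units of MeV.

import numpy as np
from scipy.integrate import quad

alpha, m_mu = 1.0/137.036, 105.66   # fine-structure constant, muon mass in MeV

def sigma_heavy(s, mZp, g_mu, g_chi):
    # gamma mu -> mu chi chibar for a heavy Z' and m_chi -> 0 (formula above)
    sh = s/m_mu**2
    poly = (-55*sh**6 + 682*sh**5 + 483*sh**4 - 968*sh**3 - 169*sh**2 + 30*sh - 3
            + 12*sh**2*(2*sh**4 - 14*sh**3 - 87*sh**2 - 52*sh + 1)*np.log(sh))
    return alpha*g_mu**2*g_chi**2*m_mu**2/(1728*np.pi**2*mZp**4)*poly/(sh**2*(sh - 1)**3)

def Q_gamma_nr(T, n_mu, F_deg, sigma_of_s, omega0=0.0):
    # static-muon energy-loss rate per unit volume, Q_gamma^nr, in MeV^5;
    # the lower cutoff avoids the numerically delicate (and negligible) soft-photon region
    integrand = lambda w: w**3/np.expm1(w/T)*sigma_of_s(m_mu**2 + 2*m_mu*w)
    wmin = max(omega0, 0.1*T)
    return n_mu*F_deg/np.pi**2*quad(integrand, wmin, 50*T)[0]

# placeholder inputs only: mZp = 1 TeV expressed in MeV, toy couplings and muon density
sig = lambda s: sigma_heavy(s, mZp=1.0e6, g_mu=1e-3, g_chi=1e-3)
print(Q_gamma_nr(T=30.0, n_mu=2.5e4, F_deg=0.7, sigma_of_s=sig))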
One might attempt to obtain a more simplified expression of the rate by taking the non-relativistic limit in Eq. (<ref>). However, the cross section in this case, σ(ω)=2α g_μ^2 g_χ^2/(105π^2 m_μ^2) ω^4/m_Z^'^4 , grows rapidly with energy and leads to a gross overestimation of the integral in Eq. (<ref>) (see also Ref. <cit.>). For instance, for the typical PNS conditions used above, one obtains a rate that is larger than the one obtained using the relativistic expression of the cross section by a factor ∼25. Nevertheless, we use the non-relativistic approximation together with Eq. (<ref>) to get a rough estimate of the emissivity produced by the photoproduction process in this regime, giving ϵ_γ^ heavy=ϵ_ max(Y_μ/0.025)(T/30 MeV )^8(√(g_χ g_μ) TeV/m_Z^')^4 . Comparing with the equivalent contribution from annihilation in Eq. (<ref>), we conclude that photoproduction gives an emissivity rate that is smaller by a factor ∼ 250 and thus can be neglected in the heavy regime. §.§.§ Resonant regime The resonant μ^-γ→μ^-Z^'(→χχ̅) cross section is σ(s) =παα_χ/m_μ^2 BR_χ/(ŝ^2(ŝ-1)^3) ×((ŝ(ŝ(ŝ+7 x^'+15)+2 x^'-1)-x^'+1)R(x^',ŝ) +4ŝ^2(ŝ^2-2ŝ(x^'+3)+2x^'(x^'+1)-3) ×tanh^-1[R(x^',ŝ)/(ŝ-x^'+1)]) , where we have introduced the notation α_χ=g_μ^2/4π, and where ŝ=s/m_μ^2, x^'=m_Z^'^2/m_μ^2 and R(x,ŝ) = (ŝ^2 -2ŝ (x+1) +(x-1)^2)^1/2. For BR_χ→1 we recover the cross section for semi-Compton production of massive vector bosons, γμ^-→ Z^'μ^-. In the m_Z^'→0 limit, one obtains σ(s) =παα_χ/m_μ^2 BR_χ/(ŝ^2(ŝ-1)^3) (ŝ^4+14 ŝ^3-16 ŝ^2+2 ŝ-1 +(2 ŝ^4-12 ŝ^3-6 ŝ^2) logŝ) , and if we now perform the non-relativistic expansion, σ(s)≈8παα_χBR_χ/3m_μ^2 , we recover the Thomson cross section for α_χ→α and BR_χ→1. This is the expression commonly used for the semi-Compton production of vector particles in stellar plasmas <cit.>, but it is less appropriate for the PNS, where leptons are relativistic. In fact, in the case of muons, for m_Z^'=0 and in the typical conditions we have been using for the PNS, we find that the Thomson cross section overestimates the relativistic one by a factor ∼2. On the other hand, the energy-loss rate of the full resonant photoproduction cross section is insensitive to m_Z^' up to m_Z^'≳ T, at which point it starts dropping due to increased Boltzmann suppression, and this defines the onset of the heavy regime. Taking as a reference the Thomson cross section, we can estimate the emissivity of the photoproduction in the resonant regime as ϵ_γ^ res=ϵ_ max(Y_μ/0.025)(T/30 MeV )^4(g_μ/5× 10^-10)^2 , using our typical PNS conditions in Eq. (<ref>). Comparing this to the emissivity from μ^+μ^- annihilation, Eq. (<ref>), we observe that the rate of χχ̅ production from muons for light Z^' will be dominated by photoproduction by many orders of magnitude. For m_Z^'≲ 10 MeV this process is even more important than resonant neutrino-antineutrino annihilation for the case g_ν_ℓ=g_μ, as demonstrated by comparing to Eq. (<ref>). §.§.§ Contribution of inverse photoproduction to trapping Assuming thermal equilibrium, which is adequate in the trapping regime, the contribution of inverse photoproduction ℓ^-χχ̅→ℓ^-γ to Γ_χ can be related to the production rates by means of detailed balance (see Appendix <ref>). In case of muons, the photoproduction rate of χ per unit volume can be calculated using the same approximations as in Eq. (<ref>), 𝒞_ prod,μ^γ=n_μ F_ deg,μ/π^2∫_ω_0^∞ dωω^2 f_γ σ(s) , while for electrons we use 𝒞_ prod,e^γ =F_ deg,e/8π^4∫^∞_0 dωω f_γ∫^∞_0dω^'ω^' f_e ×∫^+1_-1d(cosθ)sσ(s) .
Then, we estimate the contribution of photoproduction to the MFP by applying detailed balance, using the inverse of ⟨Γ_χ^γ⟩ (see Appendix <ref>) ⟨Γ_χ^γ⟩_χ = 𝒞_ prod^γ/n_χ . Note that this approximation differs from the direct calculation of the contribution of inverse annihilation and scattering in Eq. (<ref>). In order to combine the two contributions in our estimate of the MFP we use the approximate formula λ = ⟨v_χ/(Γ_χ^χ̅ + Γ_χ^l + Γ_χ^γ)⟩≈ 1/[⟨ v_χ/(Γ_χ^χ̅+Γ_χ^l)⟩^-1+⟨Γ_χ^γ⟩/⟨ v_χ⟩] , where all the thermal averages are understood to be taken with respect to the χ kinematics (see Appendix <ref>). §.§ Other processes and neglected plasma effects In our analysis, we have selected the processes that are dominant for the production and absorption of χ's from muons and have neglected plasma effects, which are expected to be small in this case. Electrons in the PNS, on the other hand, are highly degenerate. Moreover, in the plasma the electron and photon dispersion relations are significantly modified. The electron mass effectively increases, while the photon acquires a longitudinal mode and an effective mass that could enable the decay γ̃→χχ̅, where the “plasmon” γ̃ includes these collective plasma modes <cit.>. Nonetheless, the production of χ's in the heavy regime and neglecting the χ mass would be analogous to the SM pair-production of heavy-lepton neutrinos from the electrons in the stellar plasma. Plasmon decay is indeed an important process in the conditions of high densities predicted in the PNS. However, at the high temperatures reached in the SN explosions considered in this work, e^+e^- annihilation becomes the dominant process <cit.>. Adding mass to the χ's will not affect this conclusion and may, in fact, kinematically close the plasmon decay if m_χ≳ 10 MeV, which is the scale of the plasma frequencies expected in the PNS. Finally, accounting for the increase of the electron mass in the plasma by a similar amount does not significantly affect the annihilation rates, as discussed in Sec. <ref>. On the other hand, for the light Z^' some of the neglected effects can become important. For instance, resonant bremsstrahlung production e^-p→ e^-p (Z^'→χχ̅) could become the dominant process for m_Z^'∼ 10 MeV, as is the case for on-shell production of axion-like particles <cit.>. In addition, for these masses one needs to consider medium-induced γ̃-Z^' mixing, which may also have an impact <cit.>. For all of these reasons, our results for the emission from electrons in the light regime, m_Z^'≲ 50 MeV, should be considered as an intermediate step towards a more refined and robust calculation, and the results that we report for this case should be regarded as a rough approximation. § RESULTS In this section, we apply the upper limit on the dark luminosity in Eq. (<ref>) to obtain the SN 1987A constraint on the parameter space of the different dark sector models we consider. We start by presenting an analysis in terms of effective operators, which corresponds to the heavy regime introduced in Sec. <ref> for the calculation of the relevant processes. This allows us to generalize our analysis to generic interactions of dark fermions coupled to leptons via dimension-6 operators, and extract a SN 1987A limit for any portal mediator with mass much larger than the temperatures and chemical potentials in the PNS. It is interesting to note that this approach was already applied in the context of neutrino emission from stellar plasmas in the very early days of the SM <cit.>.
We then focus on the constraints on the parameter space of the simplified Z^' model of Eq. (<ref>), which can be regarded as the continuous extension of the EFT bounds to low mediator masses for one particular operator (VV). Finally, we present the SN 1987A constraints for a phenomenologically relevant and UV-motivated version of the Z^' model, obtained by gauging the L_μ-L_τ symmetry. §.§ Effective field theory The analysis for the heavy Z^' can be generalized in the context of an EFT with four-fermion operators. Focusing on the couplings of dark-sector fermions χ to leptons l=e, μ, ν_ℓ, the most general effective Lagrangian at leading order is L_ EFT = 1/Λ_l^2∑_X,Y C_X Y^l(l Γ_X l)·(χ Γ_Y χ) , where X, Y run over V,A,S,P, L,R, T,T', with Γ_V=γ^μ, Γ_A=γ^μγ_5, Γ_S=1, Γ_P=γ_5, Γ_R,L=γ^μ(1±γ_5)/2, Γ_T=σ^μν = i/2 [γ^μ, γ^ν] and Γ_T'=σ^μνγ_5, and Lorentz indices properly contracted[Only two tensor operators are independent and we will take those corresponding to C^l_TT and C^l_T'T.]. Matching the effective operators to the Z^' model in Eq. (<ref>) coupled to charged leptons yields Λ_ℓ = m_Z^' and C_VV^ℓ= g_ℓ g_χ. For neutrinos their bilinears in the EFT Lagrangian are constructed with left-handed fields and contribute, instead, to C_LV^ν_ℓ=g_ν_ℓg_χ. In Table <ref>, we show the limits obtained on Λ_l^ eff≡Λ_l/(C^l_X Y)^1/2 for these interactions in the limit m_χ=0 for muons, neutrinos and electrons. The upper limits correspond to the free-streaming regime, the lower limits to the trapping regime. Notice that the excluded regions of Λ_l^ eff are in the EFT range of validity as long as Λ_l≳ 1 GeV, which is much larger than all other energy scales relevant in the PNS. This is the case for essentially all operators. The SN 1987A bounds on the EFT operators are very strong, reaching up to 4-7 TeV. This sensitivity to the mediator mass scale is approximately one order of magnitude better than the one achieved by laboratory experiments for similar leptonic interactions <cit.> [See also <cit.> for an estimate of the SN 1987A bounds obtained by recasting the results derived in <cit.> for an operator coupled like the electromagnetic current.]. For instance, monophoton searches at LEP have been used to set the following lower bounds on Λ^ eff_e <cit.>: VV, AA: 0.48 TeV; SS: 0.44 TeV. In Fig. <ref>, we show the SN 1987A limits on these EFT operators compared with those obtained at LEP <cit.>, and with the projections of the sensitivity that could be achieved at Belle II <cit.> or at a future e^+e^- linear collider <cit.>. Remarkably, the SN 1987A bounds are stronger than LEP by roughly one order of magnitude, and will even dominate over future collider limits. Note, however, that SN 1987A constraints apply only to sufficiently light dark fermions, while collider bounds typically extend to larger χ masses; LEP, for instance, provides constraints for m_χ≲100 GeV. As discussed in Sec. <ref>, the dominant effect in the heavy regime of the Z^' model is annihilation (for typical SN conditions), and photoproduction can be neglected. We have checked that this is indeed true for all EFT operators by calculating their contributions to photoproduction explicitly (see Appendix <ref>). In fact, we note that, for the EFT limit of the Z^' model, the approximate expression in the high-energy limit (m_ℓ→0) in Eq. (<ref>) leads to a bound on the effective scale that is off only by ∼30% with respect to the full calculation.
Therefore, since the high-energy limit gives quite accurate results, the numerical bounds for different Lorentz structures and different leptons are of the same order, as in this limit the corresponding cross sections differ at most by 𝒪(1) numbers. In Fig. <ref> we show the dependence of the limits on the heavy scale as a function of the dark fermion mass m_χ for the effective VV interaction and LV for neutrinos. As expected, the excluded regions shrink with increasing m_χ and they are limited to masses m_χ≲ 300 MeV. We also analyze the variations of the constrained region produced by using the simulation SFHo-20.0 or different prescriptions for the processes included in the trapping regime. The upper limits of Λ_l^ eff obtained from this hotter simulation are a factor ∼2 stronger in the free-streaming regime, because in this case emission is dominated by the hottest region inside the supernova. On the other hand, the uncertainties of the boundary with the trapping regime for electrons and neutrinos are relatively small and our fiducial calculation is, again, on the conservative side. The reason for this behavior is that the dark sphere is not located in the region of highest temperature. This makes trapping sensitive to the shape of the temperature profile, which is similar for both simulations. Moreover, the scale marking the onset of the trapping regime in these cases is Λ_l^ eff≈100 GeV, which is of the order of the electroweak scale. This is consistent with the fact that the boundary of the trapping regime is set by the luminosity of the trapped neutrinos via Eq. (<ref>), which interact with SM leptons precisely through dimension-6 operators suppressed by the Fermi scale. For muons, however, the bound extends to scales that are a few orders of magnitude lower than for electrons and neutrinos. In addition, there is a large variation in the location of the boundary depending on the selection of processes contributing to trapping. The reason is that muons are relatively heavy and there is a maximal radius where they can be produced by thermal fluctuations in the plasma. Putting it differently, there are no muons to scatter with in case of inverse photoproduction, and inverse annihilation becomes ineffective because of the strong phase-space suppression. If only μ-χ interactions are included in the calculation of the MFP, the radius of the dark sphere is typically smaller than the one of the neutrino sphere and the bounds on Λ_μ^ eff extend down to ∼ 1 GeV. However, if one were to also include χχ̅ elastic scattering in the calculation of the MFP, and the χ is light, m_χ≲ T, then it would produce contributions analogous in size to those given by the neutrino and electron interactions, and the boundary of the trapping regime would be increased again to ∼ 100 GeV, cf. the discussion in Section <ref>. §.§ Simplified Z^' models We now present the SN 1987A constraints in the parameter space of the simplified Z^' model in Eq. (<ref>). In the derivation of our results for the annihilation contributions to Q we numerically solve the integral in Eq. (<ref>) (and the equivalent ones for neutrinos and electrons), as described in Appendix <ref>. This procedure allows us to track the contribution of the annihilation rates across the whole m_Z^' range, including the transition regions between the three regimes defined in Sec. <ref>. Instead, for photoproduction we only use the expressions in the resonant regime, because photoproduction is only relevant for the low-m_Z^' regime of the charged leptons, see Eq.
(<ref>). In Fig. <ref> we show the SN 1987A limits for the model in Eq. (<ref>), for the case when only one of the leptonic couplings is present and with g_χ=g_l. In the lower (upper) part of the plots we identify the dominant process for production (absorption) in the indicated mass range. We also show the variation of the constrained region obtained by using the SFHo-20.0 simulation (dotted curve) and, independently, by omitting the scattering contribution to the MFP in the trapping regime (hatched region). In all cases we observe the onset at high masses of the power-law behavior of the constraints ∝1/m_Z^'^4, characteristic of the EFT. Interestingly, this occurs for m_Z^'≳1 GeV in the free-streaming domain but already for m_Z^'≳100 MeV in the trapping regime. This is because in the former case emission is governed by the conditions in the hottest region of the PNS, while in the latter it corresponds to the cooler outermost layer of the dark sphere, where all relevant energy scales are smaller. The boundary of the trapping regime changes very little with respect to adding or not the scattering contributions in the absorption rate. Adding dark elastic scattering (χχ̅→χχ̅) in the muonic case (for m_χ≲ T) would instead make the trapping region similar to the neutrino case. The shape of the constrained region below m_Z^'≈1 GeV depends on the lepton considered. For muons the low-m_Z^' region is dominated by (resonant) photoproduction, which gives a flat bound up to m_Z^'≳ T, where the on-shell production of the Z^' starts decreasing due to Boltzmann suppression. Nevertheless, it remains more important than μ^+μ^- annihilation (in the light regime), until the latter becomes resonant at the μ^+μ^- threshold, m_Z^'≥2m_μ. However, this occurs already at large energies and the production quickly suffers from Boltzmann suppression, converging to the EFT scaling at higher m_Z^'. For electrons, photoproduction dominates again for low m_Z^', even though the e^+e^- threshold is much lower than for muons. Resonant annihilation of electrons quickly replaces photoproduction as the dominant process above m_Z^'≳ T in the free-streaming domain, until the resonance starts suffering from Boltzmann suppression and the EFT takes over. For the trapping regime, there is no range of m_Z^' where resonant annihilation dominates and the EFT directly replaces inverse photoproduction at m_Z^'≳ 100 MeV. For neutrinos there is no photoproduction and the production and absorption rates are given by annihilation in the resonant and heavy regimes. In the free-streaming regime we observe a strengthening of the bound up to m_Z^'≲100 MeV. This is due to the m_Z^'^2 scaling of the emission rate in the resonant regime, see Eq. (<ref>), which is quickly overcome by Boltzmann suppression until the heavy regime takes over. §.§ Gauged L_μ - L_τ models Finally, we study SN 1987A constraints on a UV-motivated realization of the simplified Z^' model in Eq. (<ref>). This is the gauged L_μ - L_τ model coupled to DM fermions, described by the interaction Lagrangian L_ int = Z_μ^'( g_μ - τ j^μ_ SM + g_χχγ^μχ) , where χ is the dark fermion and j^μ_ SM is the SM part of the L_μ - L_τ current. These interactions induce an irreducible contribution to the kinetic mixing of the Z^' with the photon, through μ and τ loops, giving a mixing parameter of the order ϵ∼ g_μ-τ/70 <cit.>. This would presumably give only small corrections to our analysis (see e.g. <cit.>) and thus will be neglected.
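The quoted size of this loop-induced mixing can be recovered from the standard low-momentum one-loop expression ϵ ≃ e g_μ-τ/(12π^2) log(m_τ^2/m_μ^2); we quote here the textbook leading-log formula, neglecting its mild momentum dependence. Evaluating it numerically, e.g. with the short Python snippet below, indeed gives ϵ ≈ g_μ-τ/70.

import numpy as np
e = np.sqrt(4*np.pi/137.036)                    # electromagnetic coupling
m_mu, m_tau = 0.10566, 1.77686                  # lepton masses in GeV
eps_over_g = e/(12*np.pi**2)*np.log(m_tau**2/m_mu**2)
print(eps_over_g, 1/eps_over_g)                 # ~0.014, i.e. eps ~ g_mu-tau/70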
It is well known that such models can accommodate the present (g-2)_μ anomaly with couplings of order g_μ - τ∼ 10^-4 for a light Z^' gauge boson, m_Z^'≪ m_μ <cit.>. They also allow one to reproduce the DM relic abundance through resonant s-channel annihilation, when the muon and the DM fermion have similar couplings to the Z^' gauge boson and the latter is heavier than the DM fermion by a factor 2-3 <cit.>. For such light masses, Z^' decays and DM annihilation can heat the SM bath after neutrino decoupling, thereby increasing the effective number of relativistic degrees of freedom, usually expressed as an effective number of neutrino species N_ eff. Such a contribution could help to reduce the long-standing tension between local and cosmological determinations of the Hubble constant <cit.>, if the new contribution is of order Δ N_ eff∼ 0.1-0.4 <cit.>. At present, the most relevant laboratory constraints (see Ref. <cit.> for an overview) stem from BaBar searches for Z^'-bosons above 212 MeV decaying to muons <cit.>, neutrino trident production <cit.> at CCFR <cit.>, bounds on coherent elastic neutrino-nucleus scattering from the COHERENT collaboration <cit.> and constraints on neutrino-electron scattering <cit.> at BOREXINO <cit.>. A variety of accelerator-based searches have been proposed to explore the unconstrained parameter space, such as NA62 <cit.>, which looks for final state radiation of Z^'-bosons in K^+ →μ^+ ν_μ, and dedicated searches using muon beam facilities such as the NA64μ experiment <cit.> at CERN and M^3 <cit.> at Fermilab. Astrophysical limits from white dwarfs have been studied already in Refs. <cit.>. Here we show that constraints from SN 1987A also disfavor a large region of parameter space, with significant overlap with the expected reach of planned experiments and with the region that could address the H_0 tension. In Fig. <ref>, we show the SN 1987A limits on the muon coupling as a function of the Z^' mass, in the scenario where muons and dark matter couple equally to the Z^', g_μ - τ = g_χ, and m_Z^' = 3 m_χ. This is compared to the current bounds discussed above, shown in gray, and the regions preferred at 95% C.L. by (g-2)_μ and H_0. In the free-streaming regime we use the same method as for the simplified model described in Appendix <ref>, including the contributions from μ^+μ^-, ν_μν̅_μ and ν_τν̅_τ annihilations. We also add the resonant photoproduction off muons, which dominates the rate up to m_Z^'≲10 MeV, where neutrino annihilation starts to give the largest contribution to the rate, producing the characteristic strengthening of the bound with m_Z^'. At m_Z' = 2 m_μ a small feature signals the onset of resonant μ^+μ^- annihilation. Instead, the boundary of the constrained region in the deep trapping regime is dominated by inverse resonant annihilation into neutrinos. Note that the χ mass scales with the Z^' mass, which has the effect of slightly suppressing the rate as compared to the massless-χ case, cf. Fig. <ref>. A rough estimate of the uncertainty of the excluded region is indicated with a dashed orange line, which shows the limits obtained from employing the hottest SN simulation SFHo-20.0, as described in Sec. <ref>. § SUMMARY AND CONCLUSIONS In this paper, we have studied the SN 1987A cooling constraints on dark-sector models induced by the emission of new light dark fermions χ coupled to leptons. To provide a concrete framework, we consider general vector portal interactions arising from the exchange of a massive Z^' vector mediator.
We focus primarily on the couplings to muons, which are predicted to have sizable number densities within the hot and dense environments of the proto-neutron star formed during core-collapse supernovae. However, we also extend our analysis to couplings to neutrinos and electrons. We have considered various mechanisms for the production and absorption of the χ's and different regimes that depend on the ranges of the parameters of the model. Firstly, the constraints depend on the mass of χ, as their pair production becomes Boltzmann suppressed for m_χ≫ T. Secondly, different regions of can be identified based on whether the dark fermions are resonantly produced or generated from the tail of the Z^' resonance. This distinction arises, for instance, when the Z^' is heavy and cannot be produced on-shell by the thermal fluctuations in the medium. Finally, there exist two distinct regimes of coupling values, depending on whether the dark fermions free-stream or become trapped within the PNS. Consequently, by analyzing the χχ̅ production within these two regimes, for given masses of the Z^' and the χ, we can determine the range of couplings that is excluded by the observations from SN 1987A. For Z^' particles with masses ≲ T∼10 MeV and massless χ, the observations from SN 1987A place constraints on the couplings between ∼10^-1 and ∼10^-9, for equal couplings of the vector mediator to leptons and dark fermions, cf. Fig. <ref>. However, the range of these bounds strongly depends on the Z^' mass and the specific lepton to which the Z^' couples. These calculations can be readily extended to explicit Z^'-models, for example motivated by a gauged lepton flavor symmetry with a dark sector charged under it. We specifically investigate the case of L_μ-L_τ, which has been proposed in the literature as a combined solution to the (g-2)_μ anomaly, the Hubble tension and the dark matter puzzle. The SN 1987A limit covers a large region of parameter space that overlaps with the forecasts of future experiments and with part of the region that could address some of the tensions, see Fig. <ref>. On the other hand, when the Z^' mass is larger than the temperature and chemical potentials in the PNS, the interactions mediated by the Z^' can be accurately described by effective operators. This allows us to generalize the analysis to completely generic heavy portal interactions between dark fermions and SM leptons, summarized in Fig. <ref>. We find that SN 1987A cooling can probe new-physics scales up to 4-7 TeV (cf. Table <ref>), which surpasses current bounds from laboratory experiments by an order of magnitude (see Fig. <ref>). We emphasize that our analysis is not complete when a light Z^' is coupled to electrons. In this case, bremsstrahlung processes are expected to provide the dominant contributions to the emission of dark fermions, and plasma effects can have a significant impact on the analysis. Nevertheless, we consider our results as an important step into this direction, which significantly extends previous studies in the literature. Finally, other aspects of SN physics could lead to constraints complementary to the ones obtained in this work (see e.g. Refs. <cit.>). In particular, recent efforts towards a better understanding of the effect of neutrino self-interactions <cit.> may enable a new application of the SN 1987A cooling bound, potentially extending the constraints in the trapping regime. 
§ CODE AVAILABILITY We provide a minimal code example to test different parameter points of the L_μ-L_τ model at <https://github.com/spinjo/SNforMuTau.git>. § ACKNOWLEDGMENTS We thank Andrea Caputo, Pilar Coloma, Miguel Escudero, Patrick Foldenauer, Felix Kahlhöfer, Enrico Nardi, Uli Nierste, Filippo Sala and Stefan Vogl for useful discussions. We want to thank Robert Bollig and Hans Thomas Janka for sharing data of the simulations with us. This work is partially supported by project C3b of the DFG-funded Collaborative Research Center TRR257, “Particle Physics Phenomenology after the Higgs Discovery" and has received support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska -Curie grant agreement No 860881-HIDDeN. The work of C.A.M. is supported by the Office of High Energy Physics of the U.S. Department of Energy under contract DE- AC02-05CH11231. Work by JMC is supported by PGC2018-102016-A-I00, and the “Ramón y Cajal” program RYC-2016-20672. § GENERALITIES OF ABSORPTION RATES In the case of generic 2→ n processes, where a dark particle χ interacts with a particle b in the initial state, the resulting contributions Γ_χ^b to the absorption rate of χ can be approximated by Γ_χ^b ≈1/2 𝔤_χ E_χ∏_final iF_deg,i∫d^3p_b/(2π)^3 2E_b f_b d^3p_i/(2π)^3 2E_i ×(2π)^4 δ^4 (∑_i p_i) ∑_ spins |ℳ|^2 =∏_final iF_deg,i× n_b⟨σ v⟩_b , where ⟨·⟩_b denotes the thermal average taken over the particle b, defined by ⟨ X ⟩_b = 𝔤_𝔟/n_b∫d^3p_b/(2π)^3 f_b X . Furthermore, σ denotes the cross section of the process, v= √((p_b · p_χ)^2 - m_χ^2 m_b^2 )/E_b E_χ is the Møller velocity and we have approximated the Pauli-blocking effects by introducing the degeneracy factors defined in Eq. (<ref>). By performing the d^3p_χ integral in Eq. (<ref>), one arrives at 𝒞_ abs^b = n_χ⟨Γ_χ^b⟩_χ = n_χ n_b ⟨σ v⟩_χ b , where ⟨σ v⟩_χ b denotes the thermal average over the complete initial state kinematics, i.e. ⟨σ v⟩_χ b≡𝔤_χ𝔤_b/n_χ n_b ∫d^3 p_χ/(2 π)^3 f_χd^3 p_b/(2 π)^3 f_b σ (p_χ, p_b) v . The inverse process defines an analogous collision operator for production of χ's, 𝒞_ prod^b, which in conditions of thermal and chemical equilibrium reads 𝒞_ prod^b=𝒞_ abs^b . This detailed balance relation can then be used to estimate the MFP in the PNS ⟨λ (p_χ)⟩_χ = ⟨v_χ/Γ_χ^b⟩_χ≈⟨ v_χ⟩_χ/⟨Γ_χ^b⟩_χ = n_χ/𝒞_ prod^b⟨ v_χ⟩_χ . § CROSS SECTIONS FOR ANNIHILATION (2→ 2) In this appendix, we list cross sections for the 2→ 2 processes ℓℓ→χχ, χχ→ℓℓ (s-channel) and ℓχ→ℓχ (t-channel), where ℓ generically refer to any lepton, including neutrinos. For the effective interactions in Eq. (<ref>), a linear independent basis is given by the operators O_SS, O_PP, O_SP, O_PS, O_VV, O_AA, O_AV, O_VA, O_TT, O_T^'T and we find[We disagree with Ref. <cit.> on the t-channel recovering their results only in the limit m_ℓ, m_χ→ 0.] 
σ_ℓℓ→χχ = √(s-4 m_χ^2)/48 π s Λ_l^4 √(s-4 m_ℓ^2)[3 C_SS^2(s-4m_ℓ^2)(s-4m_χ^2) + 3 C_PP^2s^2 + 3 C_PS^2s(s-4m_χ^2) + 3 C_SP^2s(s-4m_ℓ^2) + 4 C_VV^2(s+2m_ℓ^2)(s+2m_χ^2) + 4 C_AA^2(s^2-4s(m_ℓ^2+m_χ^2)+28m_ℓ^2m_χ^2) + 4 C_VA^2(s+2m_ℓ^2)(s-4m_χ^2) + 4 C_VA^2(s-4m_ℓ^2)(s+2m_χ^2) + 8 C_TT^2(s^2+2s(m_ℓ^2+m_χ^2)+40m_ℓ^2m_χ^2) + 8 C_T^'T^2(s^2+2s(m_ℓ^2+m_χ^2)-32m_ℓ^2m_χ^2) - 24 C_AA C_PP s m_ℓm_χ + 144 C_VV C_TT s m_ℓm_χ] , σ_χℓ→χℓ = 1/48πΛ_l^4 s^3 [ C_SS^2 ( s^4 + 2s^3(m_ℓ^2+m_χ^2)-2s^2 (3m_ℓ^4 -14m_ℓ^2 m_χ^2 +3m_χ^4)+2s (m_ℓ^2 +m_χ^2)(m_ℓ^2-m_χ^2)^2+(m_ℓ^2-m_χ^2)^4) +C_SP^2 ( m_ℓ^4 -2m_ℓ^2 (m_χ^2-2s) +(s-m_χ^2)^2)( m_ℓ^4 -2m_ℓ^2 (m_χ^2 +s) +(s-m_χ^2)^2) +C_PS^2 (m_ℓ^4 -2m_ℓ^2 (s+m_χ^2) +(s-m_χ^2)^2)(m_ℓ^4 +m_χ^4 -2m_ℓ^2 (m_χ^2 +s) +s(s+4m_χ^2)) +C_PP^2 ( m_ℓ^2 -2m_ℓ^2 (s+m_χ^2) +(s-m_χ^2)^2 )^2 +2C_VV^2 ( 4s^4 -10s^3 (m_ℓ^2 +m_χ^2) +s^2 (9m_ℓ^2 +22m_ℓ^2 m_χ^2 +9m_χ^4)-4s(m_ℓ^2 +m_χ^2)(m_ℓ^2-m_χ^2)^2+(m_ℓ^2-m_χ^2)^4) +2 C_VA^2 ( s-(m_ℓ+m_χ)^2)( s-(m_ℓ-m_χ)^2)( m_ℓ^4 -2m_ℓ^2 (m_χ^2 +s) +(2s+m_χ^2)^2) +2 C_AV^2 ( s-(m_ℓ+m_χ)^2)( s-(m_ℓ-m_χ)^2)( 4s^2+2s(2m_ℓ^2-m_χ^2) +(m_ℓ^2-m_χ^2)^2) +2 C_AA^2 ( 4s^4 -4s^3(m_ℓ^2+m_χ^2) -s^2(3m_ℓ^4 -46m_ℓ^2m_χ^2 +3m_χ^4)+2s(m_ℓ^2+m_χ^2)(m_ℓ^2-m_χ^2)^2 +(m_ℓ^2-m_χ^2)^4) + 8 C_TT^2( 7s^4 -13s^3 (m_ℓ^2+m_χ^2) +2s^2 (3m_ℓ^4+26m_ℓ^2m_χ^2 +3m_χ^4)-s(m_ℓ^2+m_χ^2)(m_ℓ^2-m_χ^2)^2 + (m_ℓ^2-m_χ^2)^4) +8 C_T^'T^2 ( s-(m_ℓ+m_χ)^2)( s-(m_ℓ-m_χ)^2)( 7s^2 +s(m_ℓ^2+m_χ^2) +(m_ℓ^2-m_χ^2)^2) +4 C_SS(C_TT (m_ℓ^8-m_ℓ^6 (4 m_χ^2+s)+m_ℓ^4 m_χ^2 (6 m_χ^2+s)+m_ℓ^2 (-4 m_χ^6+m_χ^4 s-8 m_χ^2 s^2+5 s^3) +m_χ^8-m_χ^6 s+5 m_χ^2 s^3-2 s^4)-3 C_VV m_ℓ m_χ s (m_ℓ^4-2 m_ℓ^2 (m_χ^2-s)+m_χ^4+2 m_χ^2 s-3 s^2)) + 4 C_PP(3 C_AA m_χ s (m_ℓ^4-2m_ℓ^2 (m_χ^2+s)+(m_χ^2-s)^2)+ C_TT (m_ℓ^8-m_ℓ^6 (4 m_χ^2+s)+m_ℓ^4 m_χ^2 (6 m_χ^2+s) +m_ℓ^2 (-4 m_χ^6+m_χ^4 s-8 m_χ^2 s^2+5 s^3)+m_χ^8-m_χ^6 s+5 m_χ^2 s^3-2 s^4)) + 4(C_SP+C_PS)(C_T^'T (m_ℓ^8-m_ℓ^6 (4 m_χ^2+s)+m_ℓ^4 m_χ^2 (6 m_χ^2+s)+m_ℓ^2 (-4 m_χ^6+m_χ^4 s-8 m_χ^2 s^2+5 s^3)+m_χ^8 -m_χ^6 s+5 m_χ^2 s^3-2 s^4)+6 C_VA m_ℓ m_χ s^2 (m_χ^2-m_ℓ^2)) -4 C_VV(C_AA (m_ℓ^8-m_ℓ^6 (4 m_χ^2+s)+m_ℓ^4 m_χ^2 (6 m_χ^2+s)+m_ℓ^2 (-4 m_χ^6+m_χ^4 s-8 m_χ^2 s^2+5 s^3)+m_χ^8 -m_χ^6 s+5 m_χ^2 s^3-2 s^4)+18 C_TT m_ℓ m_χ s (m_ℓ^4-2 m_ℓ^2 (m_χ^2+s)+(m_χ^2-s)^2)) +72 C_AA C_TT m_ℓ m_χ s(m_ℓ^4-2 m_ℓ^2 (m_χ^2-s)+m_χ^4+2 m_χ^2 s-3 s^2) + C_VA( 144C_T^'T m_ℓ m_χ s^2 (m_ℓ^2-m_χ^2)-4 C_AV (m_ℓ^8-m_ℓ^6 (4 m_χ^2+s)+m_ℓ^4 m_χ^2 (6 m_χ^2+s) +m_ℓ^2 (-4 m_χ^6+m_χ^4 s-8 m_χ^2 s^2+5 s^3)+m_χ^8-m_χ^6 s+5 m_χ^2 s^3-2 s^4)) +144 C_AV C_T^'T s^2m_ℓm_χ(m_ℓ^2-m_χ^2) ] . We can also generalize the cross sections for the Z^' model in Eq. (<ref>) by including generic vector (V_i) and axial (A_i) couplings to the leptons (such that V_ℓ=1, A_ℓ=0 gives back the model in Eq. (<ref>)): σ_ℓℓ→χχ^V = g_ℓ^2 g_χ^2/12π s√(s-4m_χ^2/s-4m_ℓ^2)s+2m_χ^2/(s-m_Z'^2)^2 + m_Z'^2 Γ_Z'^2 ×[ V_ℓ^2 (s+2m_ℓ^2) + A_ℓ^2 (s-4m_ℓ^2)] . The t-channel for small Z' masses is more involved because the propagator depends on the Mandelstam variable t, over which one integrates to obtain the total cross section. Neglecting the Z' decay width, the resulting expression reads σ_χℓ→χℓ^V = g_ℓ^2 g_χ^2/8π[ (V_ℓ^2 +A_ℓ^2) 2s+m_Z'^2/sm_Z'^2 + 1/m_Z'^2V_ℓ^2 (m_Z'^4 +8m_ℓ^2 m_χ^2) + A_ℓ^2 m_Z'^2 (m_Z'^2-4m_ℓ^2)/m_ℓ^4 -2m_ℓ^2 (s+m_χ^2) +(s-m_χ^2)^2 +sm_Z'^2 -2V_ℓ^2 (s+m_Z'^2) +A_ℓ^2 (s + m_Z'^2 - 2m_ℓ^2)/m_ℓ^4 -2m_ℓ^2 (s+m_χ^2) +(s-m_χ^2)^2 ×logm_ℓ^4 -2m_ℓ^2 (s+m_χ^2) +(s-m_χ^2)^2 +sm_Z'^2/sm_Z'^2] . The cross sections for the inverse process χχ→ℓℓ can be obtained using σ_χχ→ℓℓ = s-4m_ℓ^2/s-4m_χ^2σ_ℓℓ→χχ . 
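As a simple numerical cross-check, with illustrative inputs of our own choosing, the C_VV term of σ_ℓℓ→χχ above should coincide with the heavy-m_Z^' limit of the s-channel cross section quoted in the main text once C_VV = g_ℓ g_χ and Λ_ℓ = m_Z^'; a minimal Python sketch of this comparison reads:

import numpy as np

def sigma_VV_eft(s, m_l, m_chi, C_VV, Lam):
    # C_VV piece of sigma(l lbar -> chi chibar) from the EFT expression above
    if s <= 4*max(m_l, m_chi)**2:
        return 0.0
    pref = np.sqrt(s - 4*m_chi**2)/(48*np.pi*s*Lam**4*np.sqrt(s - 4*m_l**2))
    return pref*4*C_VV**2*(s + 2*m_l**2)*(s + 2*m_chi**2)

def sigma_full(s, m_l, m_chi, g_l, g_chi, mZp):
    # full s-channel propagator expression of the main text (zero width, spin-averaged leptons)
    if s <= 4*max(m_l, m_chi)**2:
        return 0.0
    beta  = lambda m: np.sqrt(1 - 4*m**2/s)
    kappa = lambda m: 1 + 2*m**2/s
    return (g_l**2*g_chi**2/(12*np.pi))*s/(s - mZp**2)**2 \
           * beta(m_chi)/beta(m_l)*kappa(m_l)*kappa(m_chi)

# illustrative point in GeV units, with mZp much larger than sqrt(s):
s, m_l, m_chi, g, mZp = 0.3**2, 0.10566, 0.02, 1e-3, 10.0
print(sigma_VV_eft(s, m_l, m_chi, C_VV=g*g, Lam=mZp))
print(sigma_full(s, m_l, m_chi, g, g, mZp))    # agrees up to O(s/mZp^2) corrections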
§ CROSS SECTIONS FOR PHOTOPRODUCTION (2→ 3) IN THE EFT LIMIT In this appendix, we derive the cross sections for the photoproduction processes ℓ^-γ→ℓ^- χ̅χ with the effective operators given in Eq. (<ref>). For simplicity, we rewrite Eq. (<ref>) factorizing the Wilson Coefficients in terms of a leptonic and dark current, C_XY^l=X_lY_χ. We start with the most simple case of scalar interactions. The amplitude reads iℳ = e/Λ_ℓ^2ϵ_μ (p_b) u̅_χ (p_1) (S_χ + iP_χγ_5) v_χ (p_2) ×u̅_ℓ (p_3) [ (S_ℓ + iP_ℓγ_5 ) p_a + p_b +m_ℓ/(p_a+p_b)^2 - m_ℓ^2γ^μ + γ^μp_3 - p_b + m_ℓ/(p_3 - p_b)^2 - m_ℓ^2 (S_ℓ + iP_ℓγ_5)] u_ℓ (p_a) . The squared and spin-averaged amplitude factorizes in two contributions |ℳ|^2 = X (p_1, p_2) L (p_a, p_b, p_3) , where the X and L denote the traces over dark and SM particles, respectively. The phase space can be factorized dΦ_3 (p_a + p_b; p_1, p_2, p_3)= dm_12^2/2πdΦ_2 ( p_a + p_b; p_12, p_3) dΦ_2 ( p_12; p_1,p_2) , where we introduced s=(p_a+p_b)^2 and the momentum of the effective two-body system of dark particles p_12 = p_1 + p_2 with invariant mass m_12^2 = p_12^2. We start with the dark system, i.e. dΦ_2 ( p_12; p_1,p_2). The function X (p_1, p_2) is a scalar and can therefore only depend on the scalar product p_1 p_2, which can be rewritten in terms of m_12^2 using p_1 p_2 = (m_12^2 - 2m_χ^2)/2, leading to X (p_1, p_2) = X̃(m_12). Thus, we obtain ∫ dΦ_2 (p_12; p_1, p_2) X(p_1, p_2)= X̃(m_12)/8π√(1-4m_χ^2/m_12^2) . The second phase-space integral can be simplified as ∫ dΦ_2 (p_a+p_b; p_12, p_3) = 1/(4π)^2√(s) ×∫p̅_3 dp̅_3 δ(p̅_3 - √(s)/2β) dcosθ dϕ , with β = √(1- 2m_12^2 +m_ℓ^2/s + (m_12^2 - m_ℓ^2)^2/s^2) , where p̅_3 is the spatial component of the 4-vector p_3, i.e. p_3^2 = E_3^2 -p̅_3^2 = m_ℓ^2. The p_3-dependence in the function L(p_a, p_b, p_3) can be rewritten in terms of s, p̅_3 and cosθ in the center-of-mass (CM) frame using p_a p_3 = √(s)/2[ E_3 ( 1+m_ℓ^2/s) - p̅_3 cosθ( 1-m_ℓ^2/s)] , p_b p_3 = √(s)/2( 1-m_ℓ^2/s) (E_3 + p̅_3 cosθ) . One can now evaluate the dΦ_2 (p_a+p_b; p_12,p_3) integral. After the trivial integrals in dϕ and dp̅_3 one can perform the dcosθ integration analytically. At this point, we are only left with the m_12^2 integration, which can not be done analytically in the general case. Introducing the dimensionless variable β_χ=√(1-4m_χ^2/m_12^2) , and x = m_12^2 / m_ℓ^2, x_χ = m_χ^2 / m_ℓ^2, ŝ = s/m_ℓ^2, we arrive at the result σ = α m_ℓ^2/64π^2Λ_ℓ^41/(ŝ-1)^3 ŝ^2∫_4x_χ^(√(ŝ)-1)^2 dx x β_χ ×(β_χ^2S_χ^2 + P_χ^2 )[ S_ℓ^2 f_S (x, ŝ) + P_ℓ^2 f_P (x,ŝ)] , where the functions f_S, f_P are defined by f_S = 4ŝ^2 ( ŝ^2 -2ŝ(x-3) + 2x^2 -10x+9) T(x,ŝ), -( 3ŝ^3 +ŝ^2 (25-7x) +ŝ (5-2x) +x-1) R(x,ŝ) , f_P = 4ŝ^2 ( ŝ^2 -2ŝ(x+1) + 2x^2 -2x+1) T(x,ŝ), -( 3ŝ^3 - 7ŝ^2 (x+1) +ŝ(5-2x) +x -1) R(x,ŝ) , with, R(x,ŝ) = √(ŝ^2 -2ŝ (x+1) +(x-1)^2) , T(x,ŝ) = tanh^-1R(x,ŝ)/ŝ-x+1 . In the limit m_χ→ 0, the final integral can be evaluated analytically, σ =α m_ℓ^2/2304π^2 Λ_ℓ^41/(ŝ-1)^3 ŝ^2 ×( S_χ^2 + P_χ^2) ( S_ℓ^2 g_S (ŝ) + P_ℓ^2 g_P (ŝ)) , where g_S = -79ŝ^6 -14 ŝ^5 -189ŝ^4 -296ŝ^3 +527ŝ^2 +54ŝ -3 + 12 ŝ^2 ( 2ŝ^4 +10 ŝ^3 +9ŝ^2 +44ŝ+25) logŝ , g_P = - 79ŝ^6 +338ŝ^5 +675ŝ^4 -1160ŝ^3 +175ŝ^2 +54ŝ -3+12 ŝ^2 ( 2ŝ^4 +2ŝ^3 -63ŝ^2 -28ŝ+17) logŝ . The matrix element for vector-like interactions reads iℳ = e/Λ_ℓ^2ϵ_μ (p_b) u̅_χ (p_1) γ_ν (V_χ + iA_χγ_5) v_χ (p_2) ×u̅_ℓ (p_3) [ γ^ν (V_ℓ + i A_ℓγ_5 ) p_a + p_b +m_ℓ/(p_a+p_b)^2 - m_ℓ^2γ^μ + γ^μp_3 - p_b + m_ℓ/(p_3 - p_b)^2 - m_ℓ^2γ^ν(V_ℓ + iA_ℓγ_5)] u_ℓ (p_a) . 
After squaring and spin-averaging, one again obtains two traces for the SM and dark part of the amplitude, respectively. Due to the vector-like nature of the interaction, these traces are contracted with two Lorentz indices, one arising from ℳ and one from ℳ^† |ℳ|^2 = X_μν (p_1, p_2) L^μν (p_a, p_b, p_3) . Due to the fact that X_μν can not only depend on m_12^2, but also on p_1^μ, p_2^μ, we can not simply factor X_μν out of the dΦ (p_12; p_1, p_2) integral as in the scalar case. However, Lorentz covariance implies that the integral can only depend on the vector p_12^μ =p_1^μ +p_2^μ ∫ dΦ_2 (p_12; p_1, p_2) X^μν (p_1, p_2) = A_1 m_12^2 g^μν + A_2 p_12^μ p_12^ν , where A_1, A_2 are functions of m_12^2 A_1 = -( (V_χ^2 + A_χ^2) + 2m_χ^2/m_12^2 (V_χ^2 - 2A_χ^2)) , A_2 =(1+2m_χ^2/m_12^2) (V_χ^2 + A_χ^2) . From this point on, the calculation is equivalent to the scalar case. We arrive at the result σ = α m_ℓ^2/48π^2 Λ_ℓ^41/(ŝ-1)^3 ŝ^2 ×∫_4x_χ^(√(ŝ)-1)^2 dx β_χ( V_ℓ^2 f_V (x, x_χ, ŝ) + A_ℓ^2 f_A (x, x_χ, ŝ)) , with f_V =-4A_1 ŝ^2 x( ŝ^2-2ŝ (x+3) +2x^2 +2x-3) T(x,ŝ) -A_1 x ( ŝ^3+ŝ^2 (7x+15) +ŝ (2x-1) -x+1) R(x,ŝ) , f_A = -4A_1 ŝ^2 x( ŝ^2 -2ŝ(x-5) +2x^2 -14x+13)T(x,ŝ) -A_1 x( ŝ^3+7ŝ^2 (x-7) + ŝ(2x-1)-x+1) R(x,ŝ) +8 A_2 ŝ^2 ( ŝ^2 -2ŝ(x+1) +2x^2 -2x+1) T(x,ŝ) -2A_2 ( 3ŝ^3 -7ŝ^2 (1+x) +ŝ(5-2x)+x-1) R(x,ŝ) . In the limit m_χ→ 0, one can again perform the dx integration analytically and arrive at σ = α m_ℓ^2/1728 π^2 Λ_ℓ^41/ŝ^2 (ŝ-1)^3 × (V_χ^2 + A_χ^2) ( V_ℓ^2 g_V (ŝ) + A_ℓ^2 g_A (ŝ)) , with g_V = -55ŝ^6+682ŝ^5 + 483ŝ^4 -968ŝ^3 -169ŝ^2 +30ŝ -3+ 12ŝ^2( 2ŝ^4 -14ŝ^3 -87ŝ^2 -52ŝ+1) logŝ , g_A = -55ŝ^6-254ŝ^5 +219ŝ^4-296ŝ^3 +119ŝ^2 +294ŝ -27+12ŝ^2 ( 2ŝ^4+10ŝ^3 +33ŝ^2-4ŝ + 49)logŝ . Finally, the matrix element for tensor-like interactions reads iℳ = e/Λ_ℓ^2ϵ_μ (p_b) u̅_χ (p_1) σ_νρ (T_χ + T'_χγ_5) v_χ (p_2) ×u̅_ℓ (p_3) [ σ^νρp_a + p_b +m_ℓ/(p_a+p_b)^2 - m_ℓ^2γ^μ + γ^μp_3 - p_b + m_ℓ/(p_3 - p_b)^2 - m_ℓ^2σ^νρ] u_ℓ (p_a) . The squared and spin-averaged matrix element can be factorized as |ℳ|^2 = X_μνρσ (p_1, p_2) L^μνρσ (p_a, p_b, p_3) . We define the index ordering in X^μνρσ such that the first and second pair of indices each corresponds to one σ^μν in the trace that constitutes X^μνρσ, i.e. X^μνρσ is antisymmetric under μ↔ν and ρ↔σ. After proper antisymmetrization, there are only two Lorentz structures that satisfy this condition, leading to ∫ dΦ_2 (p_12; p_1, p_2) X^μνρσ (p_1, p_2) = m_12^2 B_1 ( g^μρg^νσ - g^νρ g^μσ) + B_2 ( p_12^μ (p_12^ρ g^νσ - p_12^σ g^νρ) - p_12^ν (p_12^ρ g^μσ - p_12^σ g^μρ)) , where B_1, B_2 are functions of m_12^2 B_1 = 1/12πβ_χ( (T_χ^2 + T'^2_χ) + 4m_χ^2/m_12^2 (2T'^2_χ - T_χ^2)) , B_2 = -1/6πβ_χm_12^2 + 2m_χ^2/m_12^2 (T_χ^2 + T'^2_χ) . After following the standard procedure as outlined for the scalar case, one arrives at the result σ = α m_ℓ^2/4πΛ_ℓ^41/(ŝ-1)^3 ŝ^2 ∫_4x_χ^(√(ŝ)-1)^2 dx (B_1 f_1 (x, x_χ, ŝ) + B_2 f_2 (x, x_χ, ŝ)) , where f_1 = 96ŝ^2( (ŝ-x+1) T(x,ŝ)- R(x,ŝ)) , f_2 = -( 4ŝ^4 + ŝ^3 (x-16) + ŝ^2 (7x^2 + 59 x+24) + ŝ (2x^2 + 7x-16) -x^2 -3x+4)R(x,ŝ) -4ŝ^2 x( ŝ^2 -2ŝ (x+9) +2x^2 + 14x - 15) T(x,ŝ) . In the limit m_χ→ 0, the final dx integration can be done analytically and we arrive at σ = α m_ℓ^2/864π^2 Λ_ℓ^41/(ŝ-1)^3 ŝ^2 (T_χ^2 + T'^2_χ) ×[ 17ŝ^6+370ŝ^5+675ŝ^4-344ŝ^3-1153ŝ^2+486ŝ -51 +12ŝ^2 ( 2ŝ^4-26ŝ^3 -27ŝ^2 -136ŝ+37) logŝ] . 
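The remaining one-dimensional integrals over x are straightforward to evaluate numerically. As an illustration, the sketch below transcribes the scalar-operator photoproduction cross section given above into scipy and compares it, for a very small dark-fermion mass, against the closed-form m_χ→0 expression. Function names and the example parameter values are ours and purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

def R(x, sh):
    return np.sqrt(sh**2 - 2 * sh * (x + 1) + (x - 1) ** 2)

def T(x, sh):
    return np.arctanh(R(x, sh) / (sh - x + 1))

def f_S(x, sh):
    return (4 * sh**2 * (sh**2 - 2 * sh * (x - 3) + 2 * x**2 - 10 * x + 9) * T(x, sh)
            - (3 * sh**3 + sh**2 * (25 - 7 * x) + sh * (5 - 2 * x) + x - 1) * R(x, sh))

def f_P(x, sh):
    return (4 * sh**2 * (sh**2 - 2 * sh * (x + 1) + 2 * x**2 - 2 * x + 1) * T(x, sh)
            - (3 * sh**3 - 7 * sh**2 * (x + 1) + sh * (5 - 2 * x) + x - 1) * R(x, sh))

def sigma_scalar(sh, x_chi, S_l, P_l, S_chi, P_chi, alpha, m_l, Lam):
    """Photoproduction cross section for the S/P operators, general dark mass."""
    pref = alpha * m_l**2 / (64 * np.pi**2 * Lam**4) / ((sh - 1) ** 3 * sh**2)

    def integrand(x):
        beta = np.sqrt(1 - 4 * x_chi / x)
        return (x * beta * (beta**2 * S_chi**2 + P_chi**2)
                * (S_l**2 * f_S(x, sh) + P_l**2 * f_P(x, sh)))

    val, _ = quad(integrand, 4 * x_chi, (np.sqrt(sh) - 1) ** 2, limit=200)
    return pref * val

def sigma_scalar_massless(sh, S_l, P_l, S_chi, P_chi, alpha, m_l, Lam):
    """Closed-form m_chi -> 0 limit quoted above, used as a cross-check."""
    g_S = (-79*sh**6 - 14*sh**5 - 189*sh**4 - 296*sh**3 + 527*sh**2 + 54*sh - 3
           + 12*sh**2*(2*sh**4 + 10*sh**3 + 9*sh**2 + 44*sh + 25)*np.log(sh))
    g_P = (-79*sh**6 + 338*sh**5 + 675*sh**4 - 1160*sh**3 + 175*sh**2 + 54*sh - 3
           + 12*sh**2*(2*sh**4 + 2*sh**3 - 63*sh**2 - 28*sh + 17)*np.log(sh))
    pref = alpha * m_l**2 / (2304 * np.pi**2 * Lam**4) / ((sh - 1) ** 3 * sh**2)
    return pref * (S_chi**2 + P_chi**2) * (S_l**2 * g_S + P_l**2 * g_P)

# Sanity check at a placeholder point: a tiny x_chi should approach the massless limit.
args = dict(S_l=1.0, P_l=1.0, S_chi=1.0, P_chi=1.0, alpha=1/137.036, m_l=105.66, Lam=1e4)
print(sigma_scalar(sh=20.0, x_chi=1e-8, **args), sigma_scalar_massless(sh=20.0, **args))
```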
§ FULL CALCULATION OF PHOTOPRODUCTION The energy-loss rate per unit volume for photoproduction ℓ^- (p_a)γ (p_b) →χ (p_1) χ (p_2) ℓ^- (p_3) can be written as Q = 1/32 π^4∫_m_ℓ^∞ d E_a p̅_a f_a ∫_0^∞ d E_b E_b f_b ×∫_-1^1 d c_θ J_s (s, E_a ,E_b) , J_s = ∫ d Φ_3 (p_a + p_b ; p_1, p_2, p_3) | M|^2 × (E_a + E_b - E_3) (1-f_3) , where d Φ_3 is the Lorentz-invariant 3-body phase space volume, p̅_a ≡ |p⃗_a| = √(E_a^2 - m_ℓ^2), f_a^-1= e^(E_a - μ_ℓ)/T+1, f_b^-1= e^E_b/T - 1, f_3^-1= e^(E_3 - μ_ℓ)/T+1, s = m_ℓ^2 + 2 E_b (E_a - p̅_a c_θ), c_θ≡cosθ and all energies refer to the PNS frame. Using the decomposition of the 3-body phase space in Eq. (<ref>), the 2-body phase space integral over the dark fermion momenta p_1 and p_2 can be easily performed, as the spin-summed, squared matrix element | M|^2 factorizes into two contributions, cf. Eq. (<ref>) M (p_a, p_b, p_3, m_12) ≡∫ d Φ_2 (p_12; p_1, p_2) | M|^2 . The remaining phase space integral d Φ_2 (p_a + p_b ; p_12, p_3) is suitably evaluated in the CM frame, giving ∫ d Φ_2 = 2 π/√(s)∫p̅_3^' d p̅_3^' d c_δ d ϕ/4 (2 π)^3δ (p̅_3^' - β√(s)/2) , where c_δ≡cosδ, p̅_3^' = √((E_3^')^2 - m_ℓ^2) with E_3^' the final lepton energy in the CM frame, ϕ and δ denote the polar and azimuthal angle of p⃗_3^ ' and β (s, m_12) is given in Eq. (<ref>). To evaluate the Lorentz-invariant quantity M (p_a, p_b, p_3, m_12^2) in this frame, we express the relevant Lorentz scalars in the CM frame, where p_a^' = (E_a^', 0, 0, E_b^'), p_b^' = (E_b^', 0, 0, - E_b^') p_a p_b = s-m_ℓ^2/2 , p_a p_3 = E_a^' E_3^' + E_b^'p̅_3^' c_δ , p_b p_3 = E_b^' E_3^' - E_b^'p̅_3^' c_δ , where the energies in the CM frame can be written in terms of s only using E_a^' =s + m_ℓ^2/2 √(s) , E_b^' = s-m_ℓ^2/2 √(s) . These expressions allow to write M (p_a, p_b, p_3, m_12) as a function of s, cosδ, E_3^' and m_12, and it remains to evaluate (E_a + E_b - E_3) (1-f_3) in the CM frame. E_3 is related to E_3^' by a Lorentz boost with velocity (p⃗_a + p⃗_b)/(E_a + E_b) ≡p⃗_ab/E_ab = p̅_ab/E_ab (0, s_η, c_η) with s_η≡sinη, c_η≡cosη and boost factor γ = E_ab/√(s) (since s = E_ab^2 - p̅_ab^2). The same boost relates E_b to E_b^', which can be used to obtain an expression for cosη as a function of s and the initial energies in the PNS frame c_η = s (E_b-E_a)+ m_ℓ^2 (E_b + E_a) /(s-m_ℓ^2) √((E_a + E_b)^2-s) . This angle determines the desired relation between E_3 and E_3^' as E_3 = 1/√(s)( E_ab E_3^' + p̅_abp̅_3^ ' (s_η s_ϕ s_δ + c_η c_δ ) ) . Having expressed all final state energies in the CM frame, one can finally perform the d p̅_3^' using the δ-function in Eq. (<ref>), giving p̅_3^' = β√(s)/2 , E_3^' = √(s)/2( 1 - m_12^2 - m_ℓ^2/s) . Putting everything together, we finally obtain for Eq. (<ref>) J_s = 1/64 π^3∫_4 m_χ^2^(√(s) - m_l)^2 dm^2_12 β (s, m_12) ∫_-1^1 d c_δ M(s,m_12^2, c_δ) ×∫_0^2 π dϕ (E_a + E_b - E_3) (1-f_3) , where β (s, m_12) is given in Eq. (<ref>), M(s,m_12^2, c_δ) is obtained from Eqs. (<ref>),(<ref>),(<ref>),(<ref>), and E_3 = E_3 (s, m_12, E_a, E_b, ϕ, δ ) from Eqs. (<ref>),(<ref>),(<ref>). When 1-f_3 is to good approximation independent of E_3, the d ϕ integration can be trivially carried out, giving an overall factor of 2 π and (s_η s_ϕ s_δ + c_η c_δ) → c_η c_δ in Eq (<ref>). § NUMERICAL RESOLUTION OF THE KINEMATIC INTEGRALS FOR THE ANNIHILATION RATES Throughout this work, we use Monte Carlo techniques <cit.> to evaluate the thermal phase space integrals numerically, e.g. Eq. (<ref>) for annihilation and Eq. (<ref>) for trapping. 
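As a minimal sketch of how such a thermal phase-space integral can be set up, the snippet below reduces the six-dimensional integral for 𝒞_abs = n_χ n_b ⟨σ v⟩_χ b to three dimensions (the two momentum moduli and the relative angle, using isotropy of the distribution functions) and estimates it with an adaptive Monte Carlo integrator such as the vegas package. The distribution functions, degeneracy factors, momentum cutoffs and the cross section are placeholders to be replaced by the actual model inputs.

```python
import numpy as np
import vegas

def fermi_dirac(E, T, mu=0.0):
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

def collision_term(sigma_of_s, m1, m2, T, mu1=0.0, mu2=0.0, g1=2.0, g2=2.0,
                   pmax_factor=15.0, neval=20000, nitn=10):
    """Monte Carlo estimate of C = n1 n2 <sigma v>.

    Using isotropy, d^3p1 d^3p2 -> 8 pi^2 p1^2 p2^2 dp1 dp2 dcos, so
    C = g1 g2 / (2 pi)^6 * 8 pi^2 * int dp1 dp2 dcos p1^2 p2^2 f1 f2 sigma(s) v_Mol.
    sigma_of_s must return the total cross section as a function of Mandelstam s.
    """
    pmax1 = pmax2 = pmax_factor * T   # crude cutoffs for the thermal tails

    def integrand(x):
        p1, p2, c = x
        E1, E2 = np.sqrt(p1**2 + m1**2), np.sqrt(p2**2 + m2**2)
        p1dotp2 = E1 * E2 - p1 * p2 * c                      # 4-vector product
        s = m1**2 + m2**2 + 2 * p1dotp2
        v_mol = np.sqrt(max(p1dotp2**2 - m1**2 * m2**2, 0.0)) / (E1 * E2)  # Moller velocity
        return (p1**2 * p2**2 * fermi_dirac(E1, T, mu1) * fermi_dirac(E2, T, mu2)
                * sigma_of_s(s) * v_mol)

    integ = vegas.Integrator([[0, pmax1], [0, pmax2], [-1, 1]])
    result = integ(integrand, nitn=nitn, neval=neval)
    return g1 * g2 * 8 * np.pi**2 / (2 * np.pi) ** 6 * result.mean
```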
For processes where the Z' can be produced resonantly, the Breit-Wigner propagator yields a peak in the Mandelstam variable s located at m_Z' with width √(m_Z'Γ_Z')∼ g m_Z'. We want to calculate these integrals for very small couplings g∼ 10^-10, which translates into very narrow peaks. When using only several thousand points for the Monte Carlo estimate, it is unlikely that the estimate can resolve the peak structure. In particular, iterative algorithms like VEGAS are unable to adapt to the peak if they cannot extract information about it from samples in early iterations. Fortunately, we have knowledge about the shape of the peak, which we can provide to the Monte Carlo sampler. To achieve this, we include the Mandelstam variable s, corresponding to the peak location, as one of the integration variables. Subsequently, we divide the integral into three regions. One of these regions includes only the peak m_Z'^2 - σ m_Z'Γ_Z'≤ s ≤ m_Z'^2 + σ m_Z'Γ_Z' , with σ∼ 5. The other two regions cover the ranges to the left and to the right of the peak, respectively. This approach ensures that the sum of the three integrals effectively captures and resolves the peak. These integrals scale as ∝ g^2 when the contribution from the resonance dominates the integral, and as ∝ g^4 otherwise. We use this scaling behavior to validate our approach of calculating the integrals, as shown in Fig. <ref>.
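A minimal sketch of this splitting is shown below for a one-dimensional s-integration with scipy; the same three sets of bounds can be handed to a multi-dimensional sampler once s is promoted to an explicit integration variable. The toy width in the example is chosen artificially large so that the peak window is resolvable in double precision; the snippet only illustrates the bookkeeping of the split.

```python
import numpy as np
from scipy.integrate import quad

def split_resonant_integral(integrand, s_min, s_max, m_Zp, Gamma_Zp, n_sigma=5):
    """Integrate a Breit-Wigner-peaked integrand over s by splitting the domain
    into (left of peak) + (peak window) + (right of peak), as described above.

    The peak window is m_Zp^2 +/- n_sigma * m_Zp * Gamma_Zp; the resonance is
    assumed to lie inside [s_min, s_max].
    """
    assert s_min < m_Zp**2 < s_max
    lo = max(s_min, m_Zp**2 - n_sigma * m_Zp * Gamma_Zp)
    hi = min(s_max, m_Zp**2 + n_sigma * m_Zp * Gamma_Zp)
    total = 0.0
    for a, b in [(s_min, lo), (lo, hi), (hi, s_max)]:
        if b > a:
            val, _ = quad(integrand, a, b, limit=500)
            total += val
    return total

# Toy example: artificially broad width; the physical case g ~ 1e-10 is far narrower.
m_Zp, Gamma_Zp = 30.0, 1e-4 * 30.0
bw = lambda s: 1.0 / ((s - m_Zp**2) ** 2 + m_Zp**2 * Gamma_Zp**2)
print(split_resonant_integral(bw, 1.0, 1e4, m_Zp, Gamma_Zp))
```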
http://arxiv.org/abs/2307.00505v1
20230702073921
Decay widths and mass spectra of single bottom baryons
[ "H. García-Tecocoatzi", "A. Giachino", "A. Ramirez-Morales", "Ailier Rivero-Acosta", "E. Santopinto", "Carlos Alberto Vaquera-Araujo" ]
hep-ph
[ "hep-ph" ]
http://arxiv.org/abs/2307.03376v1
20230707040348
Weakly-supervised Contrastive Learning for Unsupervised Object Discovery
[ "Yunqiu Lv", "Jing Zhang", "Nick Barnes", "Yuchao Dai" ]
cs.CV
[ "cs.CV" ]
Weakly-supervised Contrastive Learning for Unsupervised Object Discovery Yunqiu Lv,  Jing Zhang,  Nick Barnes,  Yuchao Dai Yunqiu Lv and Yuchao Dai are with School of Electronics and Information, Northwestern Polytechnical University and Shaanxi Key Laboratory of Information Acquisition and Processing, Xi'an, China. Jing Zhang and Nick Barnes are with School of Computing, Australian National University, Canberra, Australia. Yuchao Dai ([email protected]) is the corresponding author. This research was supported in part by National Natural Science Foundation of China (62271410) and by the Fundamental Research Funds for the Central Universities. ============================================================ Unsupervised object discovery (UOD) refers to the task of discriminating the whole region of objects from the background within a scene without relying on labeled datasets, which benefits the tasks of bounding-box-level localization and pixel-level segmentation. This task is promising due to its ability to discover objects in a generic manner. We roughly categorise existing techniques into two main directions, namely generative solutions based on image resynthesis, and clustering methods based on self-supervised models. We observe that the former heavily relies on the quality of image reconstruction, while the latter shows limitations in effectively modeling semantic correlations. To directly target object discovery, we focus on the latter approach and propose a novel solution by incorporating weakly-supervised contrastive learning (WCL) to enhance semantic information exploration. We design a semantic-guided self-supervised learning model to extract high-level semantic features from images, which is achieved by fine-tuning the feature encoder of a self-supervised model, namely DINO, via WCL. Subsequently, we introduce Principal Component Analysis (PCA) to localize object regions. The principal projection direction, corresponding to the maximal eigenvalue, serves as an indicator of the object region(s). Extensive experiments on benchmark unsupervised object discovery datasets demonstrate the effectiveness of our proposed solution.
The source code and experimental results are publicly available via our project page at <https://github.com/npucvr/WSCUOD.git> Unsupervised object discovery, Weakly-supervised contrastive learning, Principal component analysis § INTRODUCTION Unsupervised object discovery (UOD) <cit.> refers to correctly localizing the semantic-meaningful objects in an image without any manual annotations. It is an important step for modern object detection or segmentation as it can not only reduce the effort of manual annotation in supervised-learning-based methods <cit.>, but also offer the pseudo label or initial position prompt for large-scale semi-supervised methods <cit.>. Although there exists research on unsupervised semantic segmentation <cit.>, or unsupervised instance segmentation <cit.>, we find the former usually relies on extra weak annotations, and the latter is mainly designed on top of unsupervised object discovery models. In this case, effective unsupervised object discovery models can relieve the labeling requirement, facilitating many downstream tasks, , semantic segmentation <cit.>, scene editing <cit.>, image retrieval <cit.>. As no manual annotation is provided, the mainstream of UOD is based on self-supervised methods, using either generative models via image-resynthesis <cit.> or self-supervised representation learning with instance discrimination <cit.>. We find that the image-synthesis solutions heavily rely on the generation quality of generative models, and they usually perform unsatisfactorily on images with complex background. The self-supervised model for object discovery,  DINO <cit.>, implements instance discrimination/classification <cit.>, where augmentations of the same image are regarded as the same class. Then the structures of objects are embedded with feature and attention map of the class token on the heads of the last layer, which can highlight the salient foreground objects from the image. Based on this, subsequent works <cit.> are designed to segment the objects using post-processing methods based on DINO features, such as seed expansion <cit.> or normalized cut <cit.>. However, as no semantic guidance is supplied, the attention map contains extensive noise for real-world scene-centric images, where the background can be inaccurately activated rather than the foreground. Further, as only image-level constraints are applied in DINO <cit.>, the generated representation focuses more on image-level similarity rather than pixel-level similarity. which is not ideal for semantic-dominant unsupervised object discovery. These two main issues,  inaccurate background activation and less informative semantic representation learning, drive us to find a self-supervised representation learning method specifically for unsupervised object discovery, that highlights the semantically meaningful regions of the objects and suppresses background activation. To achieve this, we incorporate weakly-supervised contrastive learning (WCL) <cit.> into the self-supervised learning framework <cit.>, where the involvement of WCL leads to semantic guided feature representation, avoiding the noisy foreground activation in existing solutions <cit.>. Specifically, as DINO <cit.> is effective in learning knowledge about objects from numerous object-centric images from ImageNet <cit.>, we first employ the ViT model <cit.> trained by DINO as our feature extraction encoder. 
To further enhance the invariant properties of the object in the representation of scene-centric images, we fine-tune the self-supervised model with weakly-supervised contrastive loss <cit.> on a scene-centric dataset without manual annotations,  images from the DUTS <cit.> training dataset. The naive weakly-supervised contrastive loss is designed based on image-level positive/negative pairs to encourage the object-centric representation of images. In our dense prediction setting, we design an alignment loss to enforce the pixel-level semantic coherence in the overlapping part of different views. With the proposed weakly-supervised contrastive learning, the intrinsic semantic information is embedded and becomes prominent in the image representation, while the background is suppressed. Although the foreground representation can still vary, the suppressed background representation becomes similar with lower variance. We then perform principal component analysis (PCA) <cit.> to extract the principal component of the image representation as the descriptor of the object, and localize the object by finding the pixels with a high correlation with the object descriptor. In summary, our contributions are threefold. * We extend weakly-supervised contrastive learning to unsupervised object discovery for the first time and introduce an alignment loss to keep the pixel-level semantic consistency for object segmentation. * We propose an effective method to extract the most discriminative region of the image representation via principal component analysis. * Experiments on unsupervised object segmentation, unsupervised object detection and video object detection tasks demonstrate the effectiveness of our proposed solution. § RELATED WORK §.§ Unsupervised Object Discovery Unsupervised object discovery aims to generate class-agnostic bounding boxes or segmentation maps to capture semantically-meaningful objects to be used in downstream vision tasks. As pointed out in <cit.>, the ability of deep neural networks heavily depends on large-scale labeled datasets, and reducing the dependency has been tackled from two directions,  generative methods via image-resynthesis, and self-supervised representation learning. Generative models <cit.> are achieved via image-resynthesis,  <cit.>, introduce latent codes for object and background respectively, and then recover the image collaboratively with an energy-based model <cit.> or generative adversarial nets (GAN) <cit.>. Other methods <cit.> aim to find an interpretable direction representing the semantic meaning of saliency lighting or background removing in the latent space of a GAN and obtain object segmentation by adjusting along the latent direction. However, the dependency on reconstruction quality makes object discovery difficult when the training dataset has complex layout and multiple object categories. Self-supervised representation learning based on instance discrimination aims to preserve the discriminative features of the object, and discover the object based on an objectness prior. DINO <cit.> trained on ViT <cit.> has shown superior performance on object segmentation even compared with supervised methods. Then, LOST <cit.> and TokenCut <cit.> explore the direct usage of DINO features on segmentation through seed expansion and normalized cut respectively. LOST <cit.> localizes the object following the assumption that patches in an object correlate positively with each other and negatively with the background, and the foreground covers less area. 
TokenCut <cit.> uses spectral clustering on the DINO feature and selects the region with maximal eigenvalue as the object. Xie <cit.> employ image retrieval and design both image-level and object-level contrastive learning to encourage the semantic consistency in similar objects. Vo <cit.> use self-supervised features and obtain the object through PageRank on a proposal-based graph. Hénaff <cit.> propose to simultaneously optimize object discovery and self-supervised representation networks and these two tasks benefit from each other. With pre-trained self-supervised model, other methods use coarse masks generated from the affinity matrix of the features from the pre-trained models as the pseudo label to further train the segmentation/detection model <cit.>, or to further solve the graph optimization problem <cit.>. In our setting, we also use the DINO pre-trained model, but we have developed an object-centric representation learning method based on weakly-supervised contrastive learning to prevent the interruption from complex background. §.§ Self-supervised Object-Centric Learning The goal of self-supervised object-centric learning is to learn task-agnostic image inherent features. In this paper, we mainly discuss instance discrimination, which learns a distinguished representation from data augmentations by considering the augmented views generated from the same image as the same class. BYOL <cit.> learns a high-quality visual representation by matching the prediction of a network to the output of a different view produced by a target network. DINO <cit.> extends the work with a different loss and employs ViT as the network architecture. SimSiam <cit.> improves BYOL with contrastive learning in a Siamese structure. Since naive pairwise feature comparison induces class collapse, many approaches simultaneously cluster the features while imposing intra-cluster consistency. SwAV <cit.> assigns features to prototype codes and allows codes of two views to supervise each other. Wen <cit.> extend the prototype idea to pixel level. WCL <cit.> also uses the clustering technique to generate semantic components and performs contrastive learning on different components. WCL has a simpler setting but comparable performance with prototype-based models. Since the object usually has the highest information density in an image, we argue that self-supervised representations can be further explored in object discovery. §.§ Unsupervised Saliency Detection Unsupervised saliency detection indicates methods that find and segment the generic objects that draw human attention most quickly, without any supervision from a large-scale dataset. Early works <cit.> use hand-crafted features and localize the salient objects based on saliency priors such as contrast prior <cit.>, center prior <cit.> or boundary prior <cit.>. The modern literature <cit.> uses a pseudo saliency map in the training of a deep segmentation network. SBF <cit.> proposes to train a network by using the pseudo labels fused by multiple traditional methods on super-pixel and image levels. USD <cit.> designs a noise prediction module to eliminate the effect of noise caused by multiple hand-crafted pseudo masks in training. This idea was extended by employing alternating back-propagation in <cit.> to learn the saliency map only from a single pseudo mask. However, these methods still rely on a backbone model pretrained on the labeled ImageNet dataset. 
SelfMask <cit.> leverages self-supervised features from MoCo <cit.>, SwAW <cit.> and DINO <cit.> and selects the salient masks based on object priors by a voting mechanism. In the following, we show that unsupervised object discovery can provide an effective baseline for unsupervised saliency detection. §.§ Uniqueness of Our Solution Given the pre-trained self-supervised model, such as the ViT model trained by DINO <cit.>, existing methods <cit.> discover the objects by using the compactness property of the foreground object to localize the object and employ the affinity matrix to explore the whole region of the object. However, the performance of these methods is restricted to the performance of DINO pre-trained model, making them vulnerable to complex background. The direct method to address this problem is finetuning the model with scene-centric images. However, due to the noises introduced by the cluttered background, the performance on finetuned DINO-based model becomes even worse according to our experiment in Table <ref>. The uniqueness of our method is that we have designed a more effective finetuning strategy based on weakly-supervised contrastive learning <cit.> to suppress the activation of the background. The proposed pixel-level alignment loss inspired by <cit.> can keep the pixel-level consistency of the object by using images from a scene-centric dataset <cit.>. Further, our method has a simple pipeline, consisting of finetuning the ViT pre-trained model with WCL and PCA on the learned features. Different from other methods that reply on training for unsupervised segmentation/detection <cit.>, our method does not need any pseudo segmentation or bounding box annotations, making it convenient for our method to generalize to these methods by functioning as the pseudo label for downstream tasks. § OUR METHOD We work on unsupervised object discovery <cit.>, aiming to localize generic object(s) of the scene. Without access to ground truth annotations, we rely on self-supervised DINO models to roughly identify the candidate object(s) (Sec. <ref>). Due to the absence of high-level semantic information modeling, we find the transformer features trained by DINO fail to encode accurate semantic information for unsupervised object discovery. We then introduce weakly-supervised contrastive learning <cit.> to the self-supervised learning framework with pixel-level alignment loss to learn a semantic dominated image representation with rich pixel-level semantic correlation modeling (Sec. <ref>). Based on the learned feature, principal component analysis is introduced to generate the pixel-wise object segmentation map (Sec. <ref>). An overview of our proposed method is shown in Fig. <ref>. §.§ Self-supervised Representation Learning Recent work based on self-supervised transformers,  DINO <cit.>, has shown impressive performance on the task of unsupervised object discovery. DINO trains a visual transformer <cit.> in a self-supervised manner to generate the image representation. The model is formulated with a teacher-student network that accepts two views of the same image as input and urges the output of the student network with parameters θ_s to match that of the teacher network with parameters θ_t. During training, parameters of the student network (θ_s) are updated via the loss function: ℒ_dino=1/2(CE(P_t(𝐱), P_s(𝐱') + CE(P_t(𝐱'), P_s(𝐱)), where P_t and P_s indicate the Softmax outputs generated by teacher and student network respectively. 𝐱 and 𝐱' are different views of the same image. 
CE(·,·) computes the cross-entropy loss. Parameters of the teacher network are obtained by an exponential moving average (EMA) on the student weights,  a momentum encoder <cit.>, leading to: θ_tλθ_t+(1-λ)θ_s, where λ is designed to follow a cosine schedule from 0.996 to 1 during training <cit.>. DINO has demonstrated that self-supervised ViT features <cit.> contain more explicit scene layout and object boundaries compared with supervised training. Thresholding its attention maps based on the token of the last layer can also generate a high-quality object segmentation map. Further, methods such as LOST <cit.> and TokenCut <cit.> that operate directly on the vanilla DINO features have achieved favorable results on simple images for object discovery. However, as both methods assume DINO features are reliable in representing generic object(s), they fail to localize objects in complex scenarios when DINO features are no longer reliable. As it is pretrained on object-centric images in ImageNet, the DINO representation is usually noisy especially in natural scene-centric images. Additionally, the training only matches different views of the same image without considering inter-image relationships, making it vulnerable to the complex components in scene-centric images. We then resort to contrastive learning <cit.> to learn inter-image correlation for effective image representation learning. §.§ Semantic-guided Representation Learning We introduce weakly-supervised contrastive learning to our unsupervised object discovery task to achieve semantic-guided representation learning. Contrastive learning: Contrastive loss <cit.> was introduced for metric learning, which takes pair of examples (𝐱 and 𝐱') as input and trains the network (f) to predict whether they are similar (from the same class: 𝐲_𝐱=𝐲_𝐱') or dissimilar (from different classes: 𝐲_𝐱≠𝐲_𝐱'). Contrastive learning has self-supervised or unsupervised approaches, aiming to learn a feature representation f_𝐱=f_θ(𝐱) for an image 𝐱, where f_𝐱 can represent 𝐱 in low-dimension space for downstream tasks,  image classification and semantic segmentation. To achieve self-supervised learning, positive/negative pairs are constructed via data augmentation techniques <cit.>, where the basic principle is that similar concepts should have similar representations, and thus stay close to each other in the embedding space. On the contrary, dissimilar concepts should stay apart in the embedding space. Taking a step further, triplet loss <cit.> achieves metric learning by using triplets, including a query sample (𝐱), its positive sample (𝐱^+) and negative sample (𝐱^-). In this case, the goal of triplet loss is to minimize the distance between 𝐱 and its positive sample 𝐱^+, and maximize the distance between 𝐱 and its negative sample 𝐱^-. A key issue for triplet loss is that it only learns from one negative sample, ignoring the dissimilarity with all the other candidate negative samples, leading to unbalanced metric learning. To solve this problem, <cit.> introduces an N-pair loss to learn from multiple negative samples. 
Similarly, InfoNCE loss <cit.> uses categorical cross-entropy loss to identify positive samples from a set of negative noise samples <cit.>, which optimizes the negative log-probability of classifying the positive sample correctly via: ℒ_N C E=-logexp(sim(𝐳_i, 𝐳_j) / τ)/∑_k=1^N 1_[k ≠ i]exp(sim(𝐳_i, 𝐳_k) / τ), where (𝐳_i, 𝐳_j) is the positive pair whose elements are the feature embeddings of image augmentation views from the same image, while (𝐳_i, 𝐳_k) are either the positive pair or the negative pair, sim(𝐳_i, 𝐳_j)=𝐳_i^T𝐳_j/𝐳_i𝐳_j is the dot product between the ℓ_2 normalized feature embedding of 𝐳_i and 𝐳_j. Each embedding is produced by a projection head after the encoder network. Weakly-supervised contrastive learning: As contrastive learning treats all negative samples equally, class collision can occur where the model simply pushes away all negative pairs (which may be positive semantically) with equal degree. The main motivation of weakly-supervised contrastive learning <cit.> is to avoid the class collision problem. Specifically, a two-projection heads based framework following SimCLR <cit.> was introduced to achieve semantic guided representation learning. Then, weakly-supervised contrastive learning <cit.> assigns the same weak labels to samples that share similar semantics and uses contrastive learning to push apart the negative pairs with different labels. The label assignment is obtained by a connected component labeling (CCL) process. Specifically, a 1-nearest neighbour graph G=(V,E) is constructed from a batch of samples. Each vertex is the feature embedding of each image, which is produced by a projection head with DINO feature as input. The edge e_ij=1 if sample i is the 1-nearest neighbor of sample j and vice versa. Graph segmentation is then achieved by the the Hoshen-Kopelman algorithm <cit.>. With the CCL process, samples of the same component after graph segmentation are given the same weak label. For implementation, in addition to the base encoder f(·) and projection head g(·) used by SimCLR <cit.>, <cit.> introduces an auxiliary projection head ϕ(·) which shares the same structure as g(·). The goal of ϕ(·) is to explore similar samples across the images and generate weak labels y∈ℝ^N× N as a supervisory signal to attract similar samples, where N is the size of the minibatch, y_i,j=1 means x_i and x_j are similar, and y_i,j=0 indicates x_i and x_j are a negative pair. Given the weak label y∈ℝ^N× N, <cit.> achieves supervised contrastive loss using: ℒ_sup=1/N∑_i=1^Nℒ^i, where ℒ^i is defined as: ℒ^i=-∑_j=1^N 1_𝐲_i,j=1logexp(sim(𝐯_i, 𝐯_j) / τ)/∑_k=1^N 1_[k ≠ i]exp(sim(𝐯_i, 𝐯_k) / τ). (𝐯_i,𝐯_j) are the embeddings of the positive pair within the same semantic group, generated by the projection head ϕ(·). τ regularizes the sharpness of the output distribution. Similarity regularization: With the weakly-supervised contrastive loss function in Eq. (<ref>), similar samples,  𝐯_i and 𝐯_j, should have similar representations. To avoid error propagation due to less accurate graph segmentation, similarity regularization is applied to guarantee that the similarity between 𝐯_i and 𝐯_j is not extremely high. Particularly, we perform two types of augmentations for samples in the mini-batch, named V_1 and V_2, and the semantic groups are swapped to supervise each other. 
Consequently, we define the graph loss ℒ_graph as: ℒ_graph =ℒ_sup (V^1, 𝐲^2)+ℒ_sup (V^2, 𝐲^1), where 𝐲^1 and 𝐲^2 are the weak labels of V^1 and V^2, respectively, and ℒ_sup computes the weakly-supervised contrastive loss as shown in Eq. (<ref>),  ℒ_sup =1/B∑_i=1^Bℒ^i, with B as batch size. Semantic enhanced representation learning: To enhance semantic-level correlation for effective representation learning, we finetune a pre-trained DINO <cit.> model with an object-centric dataset,  images from DUTS <cit.> dataset, via weakly-supervised contrastive learning. There are two main components in the weakly-supervised contrastive learning framework: the vanilla contrastive loss as shown in Eq. (<ref>), that targets at learning instance discrimination; and the graph loss in Eq. (<ref>), aiming to find common features among images sharing similar semantic information. As the vanilla contrastive loss and weakly-supervised contrastive loss only measure the similarity at image-level, the pixel-level semantic alignment loss is designed to further explore the object region by keeping the pixel-level consistency of the same object across different views of the same image. Assuming 𝐱_i and 𝐱_j are different views generated by applying data augmentation on the same image, we apply the backbone network f and the projection head η to obtain their features 𝐬_i=η(f(𝐱_i^o)) and 𝐬_j=η(f(𝐱_j^o)). Then 𝐱_i and 𝐱_j are projected into the original image by removing the data augmentation in order to obtain their overlapping regions and the corresponding features are cropped from 𝐬_i and 𝐬_j. After that, ROIAlign <cit.> is used to align the features to the same scale. Therefore, we can get 𝐬_i^o=ROIAlign(CROP(𝐬_i)) and 𝐬_j^o=ROIAlign(CROP(𝐬_j)). For simplicity, we eliminate the superscript and denote them as 𝐬_i and 𝐬_j. The alignment loss is then defined as: ℒ_align=1/chw∑_c∑_p,q(CE(𝐬_i^(p,q), 𝐬_j^(p,q)) +CE(𝐬_j^(p,q), 𝐬_i^(p,q))), where (p,q) is the coordinate, 𝐬_i^(p,q)∈ℝ^c is the feature vector on position (p,q), indicating pixel-to-pixel alignment, c,h,w indicate channel dimension, height and width of the feature 𝐬, respectively. With both the weakly-supervised contrastive loss to explore inter-image semantic correlations, and alignment loss for object-emphasized representation learning, we obtain our final objective function as: ℒ=ℒ_NCE+αℒ_graph+βℒ_align, where α and β are set empirically as α=5 and β=1. §.§ Object Discovery based on PCA With the proposed semantic-guided weakly-supervised contrastive learning, the intrinsic semantic information is embedded and becomes prominent in the image representation, while the background is suppressed. We further introduce principal component analysis (PCA) <cit.> to our framework to extract object regions for our unsupervised object discovery task. As illustrated in the bottom-right of Fig. <ref>, the principal descriptor of the image obtained from the vanilla DINO feature <cit.> contains too much noise from the image background, which is undesirable for object discovery. Although existing solutions <cit.> can segment the foreground based on clustering techniques,  spectral clustering, their dependence on the similarity matrix leads to their inability to discover heterogeneous foreground objects, as can be seen at the top of the second column in Fig. <ref>. Instead of relying on computing pixel-level/patch-level similarity, we use PCA to find the main descriptor of the image for object discovery. 
A similar idea is also applied in deep descriptor transformation (DDT) <cit.>. However, given a group of images, they use PCA to discover the inter-image semantic correlation, and we use PCA within a single image to find the dominant object(s) of the scene. Given features 𝐱 of the image, 𝐱∈ℝ^c× wh, we define the mean and covariance matrix of the features as: 𝐱=1/K∑_p,q𝐱_(p,q), Cov(𝐱)=1/K∑_p, q(𝐱_(p, q)-𝐱)(𝐱_(p, q)-𝐱)^⊤, where K=h× w, and 𝐱_(p,q) indicates the feature vector with position coordinate (p,q). We perform PCA on the covariance matrix Cov(𝐱). PCA extracts the principal direction corresponding to the principal component of the features with the largest variance <cit.>. Furthermore, as the object-centric representation has been generated by our semantic-guided learning network, we assume that the objects are the most distinctive component in the image representation. In this case, the region that has a high correlation with the principal direction is regarded as the object region. For each pixel, the projection map is calculated by: 𝐦=ξ_1^⊤(𝐱-𝐱), where ξ_1∈ℝ^c× 1 represents the eigenvector corresponding to the maximal eigenvalue. As shown in Fig. <ref>, the projection map of the 1^st eigenvector (1^st eigvec) with the largest eigenvalue can highlight the object, while the 2^nd and the 3^rd pay more attention to the background. Therefore, we use the projection map of the first eigenvector as our object discovery map. The final binary segmentation map is produced by thresholding the projection map with a threshold of 0.5, which is the median value of the output map. § EXPERIMENTAL RESULTS We evaluate our method in class-agnostic unsupervised object discovery in two forms: 1) Object Detection task to predict the bounding box of the object, and 2) Object Segmentation task to predict a binary segmentation map, including single-category object segmentation and generic-category salient object detection. Following the conventional practice from <cit.>, we also evaluate our method on video object detection. §.§ Setting Evaluation Metrics: Object Segmentation Metrics: 1) F-measure: F_β = (1+β^2) Precision× Recall/β^2 Precision + Recall, where F^max_β is reported as the maximal value of F_β with 255 uniformly distributed binarization thresholds; 2) IoU (Intersection over Union) computes the overlapping between the ground-truth segmentation mask and binary predicted mask; 3) Accuracy measures the percentage of the correctly predicted pixels; 4) Jaccard index indicates the average IoU of the dataset. The threshold to generate binary masks for IoU and Accuracy is set to 0.5; Object Detection Metrics: 1) CorLoc is employed to measure the percentage of an object whose IOU with any ground truth bounding box in an image is larger than 0.5; 2) AP@50 is the average precision with 50% as the threshold for IoU, which is used for multiple object detection. Implementation Details: We select DINO-ViT-Small <cit.> as the self-supervised transformer backbone, with patch size 16. During training, all images are resized to 224× 224. The base learning rate is set to be 7.5e-3 and updated by cosine rate decay. Maximum epoch is set to 200. It takes an hour with batch size 48 on 6 NVIDIA GeForce RTX 3080 GPUs. The inference time for each image with resolution 480× 480 is around 0.221s. Further, our model brings 2M extra parameters to our DINO-ViT-S/16 backbone. 
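To make the PCA-based discovery step described above concrete, a minimal NumPy sketch is given below. It assumes a (c, h, w) feature map already extracted by the fine-tuned backbone; the min-max normalization before the 0.5 threshold and the heuristic used to fix the sign ambiguity of the leading eigenvector are our assumptions rather than details stated in the text.

```python
import numpy as np

def discover_object(feat, threshold=0.5):
    """PCA-based object discovery on a single image.

    feat: (c, h, w) patch-feature map from the fine-tuned backbone.
    Returns the projection map onto the first principal direction and a binary mask.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                      # (c, K) with K = h * w
    x_mean = x.mean(axis=1, keepdims=True)
    cov = (x - x_mean) @ (x - x_mean).T / (h * w)   # (c, c) covariance, as in the text
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    xi1 = eigvecs[:, -1]                            # eigenvector of the largest eigenvalue
    proj = (xi1 @ (x - x_mean)).reshape(h, w)       # m = xi1^T (x - xbar), per position
    # Min-max normalize and resolve the eigenvector sign ambiguity by assuming the
    # object occupies the minority of patches (our heuristic, not from the paper).
    proj = (proj - proj.min()) / (proj.max() - proj.min() + 1e-8)
    if (proj > threshold).mean() > 0.5:
        proj = 1.0 - proj
    return proj, proj > threshold

# Example with random features standing in for DINO-ViT-S/16 patch features
# (embedding dim 384, 14x14 patches for a 224x224 input).
feat = np.random.randn(384, 14, 14).astype(np.float32)
saliency, mask = discover_object(feat)
print(mask.shape, mask.mean())
```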
§.§ Performance Comparison Object Detection: We compare our proposed method with the state-of-the-art object detection methods <cit.>, and show CorLoc of the related models in Table <ref>. To train our model (Ours), we fine-tune DINO with RGB images from the DUTS <cit.> training dataset. Except Freesolo <cit.>, the other compared methods in Table <ref> are achieved directly on the features extracted from a fully/self-supervised backbone network pre-trained on ImageNet <cit.>, while Freesolo <cit.> is trained on COCO and COCO  <cit.> with a total of 241k training images. Since multiple objects are highlighted by our method, we generate the bounding box from the segmentation map by drawing the external rectangles containing both the whole foreground region and its connected components, where extremely small components are deleted. Specifically, the steps of the bounding box generation process are: * The foreground region is partitioned by the 8-neighbour connected component analysis. * Small connected components with area smaller than 0.25% of the image are deleted. * The components whose IoU with the complete foreground region is larger than 0.7 are deleted. * The external rectangles for the remaining connected components and the whole foreground regions are drawn as the bounding boxes. Differently, previous methods <cit.> generate bounding boxes from the segmentation map by keeping the maximal connected component and filtering out other regions, leading to extensive false negatives. According to Table <ref>, our method exceeds the performance of other methods in terms of CorLoc score. From Table <ref>, we can also infer that, compared with the methods based on hand-crafted features and supervised features, the methods based on self-supervised image representation,  LOST <cit.>, TokenCut <cit.>, DeepSpectral <cit.> and our method achieves better results. However, due to the limited representation ability of the vanilla DINO feature, LOST <cit.>, TokenCut <cit.>, DeepSpectral <cit.> have inferior performance to our method. Specifically, we improve SOTA by 1.8% and 5.7% on VOC2007 and COCO20K and achieve comparable value with TokenCut on VOC2012. In Fig. <ref>, we also show the visualization results of our method and three existing techniques. It illustrates that the bounding box results of our method enclose more complete regions of the object than other approaches and contain less background. In addition, as shown in the bottom row of Fig. <ref>, the proposed method works better in discovering multiple objects in the image compared with others. However, a falsely activated bounding box including all objects is also generated. Future work should be developed to address this issue. Class-Agnostic Multiple Object Detection: To be more specific on how our model performs on multiple objects scenarios, we follow the previous techniques <cit.> to report the Class-Agnostic multiple object Detection (CAD) results in Table <ref>, where the models are trained on VOC07  <cit.>, VOC12  <cit.>, COCO20K dataset <cit.> and the testing is performed on VOC07 , VOC12 and COCO20K, respectively. Note that we report performance of the related models based on the provided numbers for multiple object detection. To obtain results for multiple object detection, one can directly generate multiple bounding boxes from the segmentation map as discussed in the previous section. However, the generated bounding boxes are too noisy to be directly used for multi-object detection performance evaluation. 
Firstly, the connected instances will be enclosed in the same bounding box, leading to extensive false positive. Secondly, the object that is occluded by other objects will be split into multiple bounding boxes, which is the main cause of false negative. In addition, the CorLoc metric is insensitive to multiple objects and redundant objects. To mitigate the undesirable impact of noisy bounding boxes and better demonstrate the efficacy of our method on multiple object detection, the conventional practice is to train the class-agnostic Faster R-CNN model <cit.>, in which the object discovery bounding boxes are employed as the pseudo label for supervised learning, and performance in Table <ref> is obtained with extra training of a class-agnostic Faster R-CNN model <cit.>. As illustrated in Table <ref>, performance of our model is comparable to the state-of-the-art multi-object detection model, namely UMOD <cit.>. Differently from our solution, UMOD <cit.> is also based on DINO <cit.> but has a more complex pipeline, including clustering of superpixels, merging the similar regions, training the foreground-background classifier, denoising the coarse masks and training the class-agnostic Fast R-CNN. Object Segmentation: We also compare the proposed method with the existing object segmentation methods[We test the missing metrics of the competing methods on our machine following their official implementation.] <cit.> as shown in Table <ref>, where we train the model using images from the DUTS training dataset, and testing using CUB-200-2011 (CUB) <cit.> for single-category segmentation, DUTS <cit.>, ECSSD <cit.> and DUT-OMRON <cit.> datasets for multiple-category segmentation. Among the generative solutions, ReDO <cit.> and DRC <cit.> train generative adversarial network (GAN) <cit.> or energy-based model (EBM) <cit.> on the testing dataset of single object detection dataset such as CUB <cit.> and Flowers <cit.> or simple multiple object detection dataset CLEVR <cit.>. BigBiGAN <cit.> and FindGAN <cit.> train GAN model on ImageNet <cit.>. Other methods use the pre-trained backbone in DINO <cit.>. We only report the results of <cit.> and <cit.> on the CUB bird dataset as their models were trained on a single-category dataset. Table <ref> demonstrates that the existing methods based on self-supervised image representation outperform both the generator methods <cit.> and those generated from human-interpretable directions in the latent space of GANs <cit.> in both single-category dataset CUB and other generic-category salient object detection datasets. Among all the related models, our method achieves the best result in all metrics before mask refinement. We also show superior performance compared with the state-of-the-art in the majority of metrics after refinement with a bilateral solver <cit.>. Fig. <ref> illustrates the visualization results of our method in object segmentation. Compared with other algorithms, our method captures the prominent object even in complex surroundings. Bilateral Solver <cit.> Analysis: The bilateral solver <cit.> is employed to post-process our coarse segmentation mask following TokenCut <cit.>. Although it is effective in most situations, according to the improvement shown in Table <ref>, there is still some negative influence on some images, as depicted in Fig. <ref>. This is because bilateral solver segments object depending on the local structure of the image, and then the precision of the segmentation mask degrades when the local difference is large. 
Therefore, the unstable nature of the bilateral solver on segmentation is the main reason that our refined results is inferior to TokenCut <cit.> in some metrics. As a future work, we will be dedicated to improve the optimization objective in bilateral solver for more stable prediction refinement. Video Object Segmentation: We also illustrate the generalization of our method by comparing our method with the training-free method TokenCut_video <cit.>, where our model is still trained with images from DUTS <cit.> training dataset. We evaluate the model on the video dataset DAVIS2016 <cit.> (50 sequences, 3455 frames), SegTrackv2 <cit.> (14 sequences, 1066 frames) and FBMS <cit.> (59 sequences, 7,306 frame, the annotation is provided every 20 frames). To begin with, we obtain the RGB feature and Flow feature generated from RGB image and RAFT Flow image <cit.>. Then we compute the covariance on these two features following Eq. (<ref>), denoted as Cov and CovF. Next, the first component is obtained by calculating the eigenvector of the matrix λ_1·Cov+λ_2·CovF, where λ_1=0.5, λ_2=1.5 in our setting. Since video segmentation should consider the inter-frame relationship, the eigenvector is computed by concatenating the features of multiple frames (20 frames in our experiment). From Table <ref>, the performance is superior than TokenCut_video on DAVIS <cit.> and FBMS <cit.> dataset and a bit inferior on SegTrackv2 <cit.> dataset. This is because SegTrackv2 is a small dataset that only contains 1066 frames and has more bias comparing with DAVIS and FBMS. In addition, the eigenvectors are calculated only once every 20 frames, and the speed can reach 20 FPS. §.§ Ablation Study We have also conducted extra experiments to comprehensively analyse different components of the proposed strategy, and show performance in Table <ref>. Vanilla DINO in Table <ref> indicates that the object region is discovered by employing PCA or spectral clustering directly on the original DINO feature and finetuned DINO represents that the backbone is further trained using images of the DUTS training dataset <cit.>, following the original loss in DINO <cit.>. Row 1-4 in Table <ref> show that the finetuned DINO model has even worse performance in localizing objects than the original DINO model. Our method is then proposed to solve the issue that the representation is disturbed by complicated backgrounds when training on scene-centric images. Then, we show the importance of the losses in our method. In Table <ref>, NCE Loss and Graph Loss denote that only NCE Loss defined in Eq. (<ref>) or Graph Loss defined in Eq. (<ref>) is used in finetuning the vanilla DINO-pretrained backbone, respectively. We can see that their performance are both improved compared with vanilla DINO+PCA, but still inferior to vanilla DINO+SC. The reason for not using alignment loss in Eq. (<ref>) alone is that training the network with it alone cannot converge. In addition, the performance drops if one of the losses in our semantic-guided representation learning method is deleted, which demonstrates that all of these losses are essential. §.§ Discussions Using other self-supervised pre-training model: The objective of our method is to make the model features concentrate on the object. Therefore, it could be generalized to other self-supervised learning methods. To evaluate its effectiveness, we replace the ViT model pretrained by DINO with MocoV3 <cit.> and MAE <cit.> pretraining model. 
As shown in Table <ref>, the performance is generally better when WCL and alignment loss are employed. Discovery map as pseudo mask for segmentation model: In order to further improve the performance of segmentation, the discovery mask could be employed as the pseudo mask to train a segmentation model <cit.>. Therefore, we train the salient object detection model based on Maskformer <cit.> by using our discovery mask as the pseudo mask. From Table <ref>, we can find that the performance is superior to the SOTA Selfmask <cit.> under most metrics, even though three types of pseudo labels are used in Selfmask. Scalability on large dataset: We apply our method to C120K <cit.> dataset to verify the scalability of our method. Images in C120K are selected from the training and validation sets of the COCO2014 dataset <cit.>, but those containing only the crowd object are deleted. As shown in Table <ref>, both TokenCut <cit.> and our method can scale well on the C120K dataset, while we achieve superior performance. Spectral clustering v.s. PCA: In early works, such as TokenCut <cit.> and DeepSpectral <cit.>, spectral clustering (SC) has shown its advantage for object discovery. Although both SC and PCA are dimension reduction methods, experimental results show that our proposed PCA-based discovery method outperforms the SC-based discovery method. To verify effectiveness of PCA within our setting, we replace it with spectral clustering following the same procedure as it has been used in TokenCut <cit.> and report the performance in Table <ref> as Ours_SC. From the results, although Ours_SC has better values in most metrics, our PCA-based method (Ours_PCA) outperforms the spectral clustering version (Ours_SC) generally. The reason is that our weakly-supervised contrastive learning (WCL) loss and alignment loss enable the network to focus more on the discriminative features of the object. With PCA on top of the WCL, our method can capture the features of the object more completely, while spectral clustering localize mainly on the local regions of the object, representing the discriminative region (see Fig. <ref>). § FAILURE CASES §.§ Object detection The failure of object detection is mainly attributed to the bounding box generation method and inaccurate segmentation results. Fig. <ref> displays different situations where the failure cases happen, in which (a)-(c) are the cases caused by the bounding box generation method and (d) is the failure case generated by the error segmentation map. We can see that, a) the object detection cannot split the connected component region that contains multiple objects, b) the whole region bounding box is redundant if multiple objects are successfully captured by the separated bounding boxes, and c) the separated bounding boxes generated by different connected components are unwanted if the object is occluded by other objects. In addition, as shown in (d), our object discovery method fails to capture the objects that are less salient in the image. §.§ Object segmentation Fig. <ref> illustrates some failure cases of object segmentation. In (a), the contour of the object has a human structure, but we fail to segment it because further details are required. In (b), although the salient parts of the object are highlighted in the discovery map, our method ignores the less salient regions that also belong to the object. If the background also contains some regions with an objectness property, some regions in the background will also be explored (in (c)). 
Conversely, if the object has less objectness, the object in (d) could not be discovered because it is sometimes a component of the background in other scene-centric images. § CONCLUSION In this paper, we propose an unsupervised object discovery method based on semantic-guided self-supervised representation learning. Our method uses the DINO transformer backbone as the feature encoder, benefiting from the large-scale object-centric pre-training. Then we finetune the DINO backbone on the DUTS training images using our semantic-guided representation learning method to extract the object-centric representation from scene-centric data. Finally, the object region is generated from the image representation by extracting the principal component via PCA. Experiments conducted on object detection, object segmentation and video object segmentation demonstrate the effectiveness of our method. As a self-supervised learning technique, similar to existing techniques, our method still has limitations in dealing with images of complex background. More investigation into robust unsupervised object detection will be conducted in the future. IEEEtran
http://arxiv.org/abs/2307.01514v1
20230704065016
SelfFed: Self-supervised Federated Learning for Data Heterogeneity and Label Scarcity in IoMT
[ "Sunder Ali Khowaja", "Kapal Dev", "Syed Muhammad Anwar", "Marius George Linguraru" ]
cs.LG
[ "cs.LG", "cs.CV" ]
Journal of Class Files, Vol. 14, No. 8, August 2021 Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals SelfFed: Self-supervised Federated Learning for Data Heterogeneity and Label Scarcity in IoMT Sunder Ali Khowaja, Senior Member, IEEE, Kapal Dev, Senior Member, IEEE, Syed Muhammad Anwar, Senior Member, IEEE, Marius George Linguraru,  Senior Member, IEEE Sunder Ali Khowaja is with Faculty of Engineering and Technology, University of Sindh, Jamshoro, Pakistan, (Email: [email protected]). Kapal Dev is with Department of Computer Science, Munster Technological University, Ireland, (E-mail: [email protected]). Syed Muhammad Anwar and Marius George Linguraru are with Sheikh Zayed Institute for Pediatric Surgical Innovation, Childrens National Hospital, and with Department of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington DC, (E-mail: [email protected] and [email protected]). ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Self-supervised learning in federated learning paradigm has been gaining a lot of interest both in industry and research due to the collaborative learning capability on unlabeled yet isolated data. However, self-supervised based federated learning strategies suffer from performance degradation due to label scarcity and diverse data distributions, i.e., data heterogeneity. In this paper, we propose the SelfFed framework for Internet of Medical Things (IoMT). Our proposed SelfFed framework works in two phases. The first phase is the pre-training paradigm that performs augmentive modeling using Swin Transformer based encoder in a decentralized manner. The first phase of SelfFed framework helps to overcome the data heterogeneity issue. The second phase is the fine-tuning paradigm that introduces contrastive network and a novel aggregation strategy that is trained on limited labeled data for a target task in a decentralized manner. This fine-tuning stage overcomes the label scarcity problem. We perform our experimental analysis on publicly available medical imaging datasets and show that our proposed SelfFed framework performs better when compared to existing baselines concerning non-independent and identically distributed (IID) data and label scarcity. Our method achieves a maximum improvement of 8.8% and 4.1% on Retina and COVID-FL datasets on non-IID dataset. Further, our proposed method outperforms existing baselines even when trained on a few (10%) labeled instances. Federated Learning, Self-Supervised Learning, Contrastive Network, Label Scarcity, Data Heterogeneity. § INTRODUCTION Internet of Medical Things (IoMT) has been an active and challenging area of research for over a decade to the researchers associated with machine learning. 
IoMT comprises various tasks including segmentation, classification, and disease detection <cit.>. Standard machine learning algorithms, when applied to analyze data generated by IoMT, are successful in improving the performance of the aforementioned tasks. However, these methods face challenges that cannot be tackled when only traditional methods are used. Among these, one of the challenges is data privacy: when patients' data is shared to train a machine learning algorithm, it not only risks data leakage but is also vulnerable to data manipulation, resulting in wrong diagnoses. Another challenge corresponds to learning from IoMT data in a distributed manner. With heterogeneous sensing devices and wearable sensors, IoMT data is rarely stored on a centralized server; therefore, data from different servers placed at different hospitals needs to be trained on in a distributed manner. Further, in real scenarios, it is highly probable that the data from different servers and hospitals follow different distributions <cit.>. In this regard, a decentralized learning paradigm is needed to train on the data residing on client devices in order to ensure privacy and tackle heterogeneity in data distribution. The data privacy and varying data distribution issues have been addressed by a seminal work <cit.>, which proposed using the federated learning (FL) paradigm. In FL, each client trains an individual model locally at the client side, followed by an aggregation of client models at the server side to construct a global model. Copies of the global model are then sent back to the clients for improved inference. In recent years, FL has seen drastic growth due to its ability to overcome regulatory restrictions on data sharing, reduce data migration costs, and strengthen data privacy. The FL technique is characterized as a distributed machine learning technique and has been widely adopted for applications in the domains of data mining, natural language processing, and computer vision. However, data heterogeneity remains one of the challenges that resists the wider adoption of FL methods. The data heterogeneity issue gets more challenging due to the generation of diversified datasets with varying distributions at the client side (due to heterogeneous sensing devices) and behavioral preferences that result in a skewed distribution of labels (some sensors could use heart rate, while others use heart rate variability or blood volume pulse in daily life). Another challenge associated with the FL approach is label scarcity, which has been explored relatively less in the available literature. This issue can become severe in edge device settings, as users do not have a user-friendly interface to annotate their data on their IoT or smartphone device. Further, in some cases, users are reluctant to annotate sensitive and private data due to privacy concerns. This is one of the probable reasons why large-scale labeled IoMT datasets are scarce. Researchers have worked extensively on the mitigation of the data heterogeneity issue by proposing methods for training a global model such as FedOPT, FedNOVA, FedProx, and FedAvg. Researchers have also proposed distributed and personalized FL frameworks to deal with the issue of data heterogeneity in the form of DBFL, Per-FedAvg, Ditto, and pFedMe <cit.>.
However, the aforementioned methods work under the assumption that sufficient labels are available at the edge, which is not true in real-life scenarios, especially in the case of IoMT. An Example of label scarcity and data heterogeneity in IoMT are shown in Figure 1. For instance, some hospitals have more data concerning the severe conditions of the patients while others have more data related to the normal conditions of the patients. This phenomenon is referred to as label distribution skew or data partitions with non-identical distribution (non-IID). Another problem highlighted in Figure 1 is related to quantity skew as some hospitals have large amount of data while others have smaller number of images. Similarly, each of the client hospital might use different patient population, acquisition protocols, and devices, which results in feature distribution skew. Label Scarcity is also one of the challenges represented in Figure 1 as some hospitals have large number of annotated data while others might lack in expertise for annotating the data or small proportion of labeled data, respectively. Recently, researchers proposed to tackle the label scarcity issue in FL through semi-supervised methods by assuming that either client or server possess few labeled instances <cit.>. The semi-supervised methods are based on pseudo-labeling and consistency loss for training a global model. Researchers also tried using unsupervised learning using Siamese network but the method was not able to handle data heterogeneity issue quite well. Furthermore, unsupervised learning in FL was also limited in terms of scalability concerning IoMT devices. Researchers have starting exploring the self-supervised learning strategies such as SimSiam <cit.>, BYOL <cit.>, SwAV <cit.>, and SimCLR <cit.> to tackle the problem of label scarcity but the problem of data heterogeneity still remains at large. Extensive research efforts have been carried out to solve skewed data distribution problem that is caused due to the data heterogeneity. A recent study suggested that the family of vision transformers <cit.> can reduce the issue of data heterogeneity to a certain extent in comparison to traditional convolutional neural networks (CNNs). The study in <cit.> showed that adopting vision transformers can improve the overall performance in FL methods. Although the statement holds true on general computer vision applications that rely on supervised pre-training from ImageNet but the same cannot be adopted to IoMT data due to domain discrepancy. Therefore, the method that can handle data heterogeneity and label scarcity by leveraging small-scale labeled instances at client side and large-scale unlabeled data at the server side to learn global model in collaborative manner is needed. In this paper, we address the label scarcity and data heterogeneity issue by proposing self-supervised federated learning (SelfFed) using online learning principles. The framework employs two phases, i.e. SelfFed pre-training phase and SelfFed Fine-tuning phase. Studies have proved that pre-training pertaining to self-supervised learning is an effective strategy to deal with inductive bias and skewness in label distribution. The visual features learnt from unlabeled data on the client side are leveraged to update the online network encoder at the server side. A contrastive network using consistency loss is trained on the server side, which shares the encoder weights and model parameters with the client side for stage 2 proceedings. 
The client-side model in stage 2 is connected with a linear classifier, which is then trained in a supervised manner. The encoder parameters updated via the cross-entropy loss are then shared with the online network encoder for further aggregation. Unlike existing works, we use a target network and projection heads to extract high-quality representations. The contributions of this work are summarized as follows: * We propose a novel framework, SelfFed, which simultaneously deals with both data heterogeneity and label scarcity in IoMT data. * We further propose a novel contrastive network and aggregation mechanism to perform fine-tuning in the self-supervised FL paradigm. * We evaluate the proposed method on various datasets and show that SelfFed performs better than existing FL methods as well as supervised baselines pre-trained on ImageNet when using non-IID data. § RELATED WORKS This section consolidates the review of articles concerning traditional self-supervised learning techniques, including contrastive learning, followed by a review of techniques based on the FL paradigm. Lastly, we review some works that combine FL and self-supervised learning for a particular application. §.§ Contrastive Learning In recent years, self-supervised learning has gained a lot of attention due to its capability of utilizing pretext task information for extracting better features and representations that can be leveraged as supervised information <cit.>. Therefore, it is important to come up with a relevant pretext task, which may vary across different domains including graphs, computer vision, natural language processing, and so forth. Generally, self-supervised techniques are categorized into contrastive-based, temporal-based, and context-based methods. Temporal-based methods model the temporal characteristics of the data, such as next sentence prediction in natural language processing through BERT, or predicting the next action in a video while computing frame similarity <cit.>. The sequential property in temporal-based self-supervised learning is also referred to as an order-based constraint. Another class of techniques utilizes contextual information, such as the frequency of words or similar words that define the same thing through Word2Vec, for the task construction. This class is termed context-based self-supervised learning. Many researchers have used contextual information such as image colorization, rotation prediction, jigsaw puzzles, and other representations for tasks in the computer vision field. However, contrastive learning is the most popular technique used with self-supervised learning. Contrastive learning relies on the feature space to distinguish between data instances rather than considering the fine-grained details of individual instances. This helps contrastive learning generate a model that is both more generalizable and simpler. Contrastive learning uses the InfoNCE loss <cit.> to learn discriminating features among contrasting instances while identifying supportive features for identical instances. The simplicity and generalizability of the model can be achieved because contrastive learning is able to distinguish or identify similarity between instances at an abstract semantic level, i.e., in feature space. One of the ways to improve the performance of contrastive learning is to increase the number of negative samples. One of the seminal studies that employs a Siamese network for maintaining a queue of negative samples through a momentum encoder is MoCo <cit.>.
On the other end of the spectrum, the SimCLR <cit.> technique uses the negative samples available in the current batch. Another approach, BYOL <cit.>, does not require negative samples; rather, it uses augmented views of the available images to train its online network while maintaining decent performance. Out of the aforementioned three, BYOL represents the closest match to real-world situations. Recently, vision transformers (ViTs) have been adopted for federated learning <cit.>, such as MAE <cit.>, BEiT <cit.>, and Swin Transformers <cit.>, to overcome data heterogeneity issues. Furthermore, ViTs are also capable of learning representations from corrupted images via signal reconstruction. We refer to these methods as augmentive modeling. Where contrastive learning relies on data augmentations or a large sample size, ViTs achieve the same performance level due to their intrinsic characteristics. §.§ Federated Learning Federated learning was first introduced by Google <cit.> and has become popular due to its privacy preservation characteristics. The FL paradigm eliminates the requirement of data sharing from the client side while allowing models to be trained in a collaborative manner. Generally, FL approaches train the model locally at the client side and maintain the aggregation of a global model at the server side. The aggregation and optimization of the global model is carried out in an iterative manner through communication rounds between the server and the clients. The flow of operations in FL is as follows: * The global model from the server is sent to the clients for collaborative training. * Clients update the global model locally with their local training data and send the updated model to the server. * Model updates from the clients are aggregated at the server to update the global model. Once the model is updated, it can be used for various tasks. There have been many algorithms in the FL paradigm to aggregate the model updates; the most popular of them is the federated averaging algorithm (FedAvg) <cit.>. Local parameters are updated through multiple gradient descent updates, which increases the computation at the local node but decreases the amount of communication. Another similar approach for aggregating model updates is FedSGD <cit.>. The main problem with such methods is that convergence requires a large number of iterations, along with the heterogeneity of the data and devices involved. Various methods have been proposed over the years to address the non-IID data issue. These approaches can generally be categorized as methods focusing on aggregation and methods focusing on stability. The former strive to improve the efficiency of the model aggregation process, such as FedNova <cit.>, while the latter aim at the stability of the training process at the local nodes. The methods concerning stabilization include FedAMP <cit.> and FedProx <cit.>. The problem with existing FL approaches is the assumption that the true labels for all instances are available on the devices. However, labeled data is a limited resource and labeling requires expertise. In some domains such as IoMT, labeling is synonymous with high costs. Another problem is associated with the heterogeneity of data from a model initialization perspective <cit.>. Some studies suggest that the problem of data heterogeneity can be alleviated by performing pre-training; however, the use of ViTs has been considered a more viable solution to address the data heterogeneity issue <cit.>.
Nevertheless, it is imperative to design FL-based methods that deal with both issues, i.e., data heterogeneity and label scarcity. §.§ Federated Learning based on Contrastive Networks Contrastive networks employed in the FL paradigm have shown promising results in alleviating non-IID problems <cit.>. The use of contrastive networks in FL is generally performed in two phases. The first phase extracts visual representations from distributed devices by performing collaborative pre-training on unlabeled data. This phase forwards the shared features and performs aggregation of model updates similar to conventional FL approaches. At the client side, local unlabeled data is utilized to perform contrastive learning. The second phase performs fine-tuning using a limited number of labels and the pre-trained model from the previous phase. The second phase is carried out via federated supervised learning on each device. The works employing contrastive networks in the FL paradigm consider skewness concerning label distributions only. For instance, the number of images is assumed to be the same for each client and the number of data instances is usually considered to be more than 10K. Such assumptions are not always true in real-world settings; therefore, it is yet to be observed how such methods perform when either of the aforementioned assumptions does not hold. In this work, we propose the use of the Swin transformer along with MAE <cit.> to perform augmentive modeling as the self-supervised task. We assume that the use of the Swin transformer and MAE will help the contrastive network in FL converge faster with a smaller number of data instances. § METHODOLOGY §.§ Problem Definition The main motivation of this work is to design a self-supervised federated learning (SelfFed) approach that does not require data sharing while learning in a decentralized manner. SelfFed is designed such that it should perform well even on non-IID data from clients with a limited number of available labels. The number of clients in this work is denoted by M. A local dataset 𝒟^m is associated with each client such that m ∈1,...,M. A global model needs to be learnt in a generalized manner over all local datasets such that 𝒟 = ⋃_m=1^M 𝒟^m. The empirical loss of client m over the data distribution 𝒟^m is defined in equation 1. loss_m(𝐰) = 𝔼_x ∼𝒟^m[𝕃_m(𝐰;x)] In the above equation, 𝐰 represents the parameters of the global model to be learned, while the notation 𝕃_m refers to the loss function of client m. This work focuses on dealing with the data heterogeneity problem such that the data can be non-IID, i.e., 𝒟^a ≠𝒟^b, and might follow non-identical data distributions, PD_a(x,y) and PD_b(x,y). This work also addresses the problem of label scarcity at the local client level. Therefore, we define the unlabeled and labeled datasets as 𝒟^m_ul = {x} and 𝒟^m_lb = {(x,y)}, respectively. We also assume that |𝒟^m_lb| is comparatively very small. §.§ SelfFed Framework The workflow of the proposed SelfFed framework is shown in Figure 2. In the subsequent sections, we show that the proposed SelfFed framework is able to handle the data heterogeneity as well as the label scarcity problem. As shown in Figure 2, the proposed framework is divided into two stages: a pre-training and a fine-tuning stage concerning self-supervised federated learning. In the pre-training stage, augmentive modeling is exploited in order to learn representative knowledge in a distributed manner.
In latter stage, federated models are fine-tuned by transferrence of representative knowledge from the previous stage to perform target task, respectively. For the augmentive modeling, we leverage MAE <cit.> method into our SelfFed framework. The reason for choosing MAE over BEiT is performance advantage as highlighted in <cit.>. The MAE integrated SelfFed framework is denoted by SelfFed-MAE, accordingly. We provide details for the pre-training and fine-tuning of SelfFed-MAE in the subsequent subsections. §.§ SelfFed Pre-training Each client consists of a local autoencoder and a decoder represented as Enc_m and Dec_m, during the pre-training phase, respectively. Augmentive modeling is used to train the models, which undertakes patches from subset of an image and learns to reconstruct the marked patches, accordingly. MAE <cit.> is leveraged as our self-supervision module, specifically for performing the augmentive modeling. For our IoMT usecase, we consider medical images to validate the proposed SelfFed framework. An input image x ∈ℝ^H × W × C belong to a certain data distribution 𝒟^m is segmented into image patches such that x_r = {x_r^j}_j=1^R∈ℝ^R × (V^2.C), where C refer to the number of channels. The dimensions of the original and image patch are represented by (H,W) and (V,V), respectively. The number of patches are determined using R = HW/V^2. §.§.§ Augmentative Masks Masking ratio for generating augmentation is denoted as ψ, the unmasked positions are denoted as ς while the masked positions are denoted as ϑ. When ψ% of patches are masked in a random manner, we get ς + ϑ = R and ϑ = ψ R. The representation of overall image patches can be given as shown in equation 2. x_r = { x_r^j, j ∈ς}∪{x_r^j, j ∈ϑ} where the first and second term in the aforementioned equation characterize unmasked and masked visible patches, respectively. The adopted method MAE <cit.> considers random masking. §.§.§ Encoder We employ Swin Transformer <cit.> as the representational encoder in SelfFed framework, which undertakes the image patch sequences as illustrated in Figure 2. In conjunction to the Swin Transformer encoder, the MAE considers the linear projections with the position embeddings as an input extracted from the patches. The projection is denoted as x_r^ςEnc. The output is obtained in the form of encoded visible patch, which is represented as {o_j,j ∈ς}. §.§.§ Decoder Based on the input patches and their encoded representations, the decoder task is to reconstruct the signal, effectively. The visible patches are provided as an input to the MAE in addition to the position embeddings and masked patches, i.e. γ_r^ϑ = {γ_r^j, j ∈ϑ}, which acts as a learnable vector for the decoder. §.§.§ Loss function The local objective function through which Enc_m and Dec_m are trained on local dataset 𝒟^m is given in equation 1. Concerning MAE, the mean square error for the masked patches such that {x_r^j, j ∈ϑ}∈ℝ^V × V × C is represented by 𝕃_m and can be defined as shown in equation 3. 𝕃_m = ∑_j ∈ϑ1/ϑ ((x_r^j - x̂_r^j)^2;𝐰) §.§ SelfFed Fine-Tuning On the server side, we have two networks namely online and target networks that consists of an encoder and a projection head. The input image to server side is represented with two views represented as ι_+ and ι_++. The online network takes into account the former view while the target networks considers the later view of the image. These views undergo the encoder to extract the higher dimensional representation. 
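Before detailing the projection heads and the target-network update, the augmentive-modeling objective of equation 3 used in the pre-training stage can be illustrated with a short sketch; the patchify step, the 60% masking ratio, and the encoder/decoder modules below are placeholders and not the exact SelfFed-MAE implementation.

```python
import torch

def patchify(images, patch=16):
    """Split images (B, C, H, W) into flattened patches (B, R, patch*patch*C)."""
    B, C, H, W = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)   # (B, C, H/p, W/p, p, p)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

def masked_reconstruction_loss(images, encoder, decoder, mask_ratio=0.6, patch=16):
    """Augmentive-modeling loss: MSE computed on the masked patches only (cf. equation 3)."""
    patches = patchify(images, patch)                      # (B, R, D)
    B, R, D = patches.shape
    n_masked = int(mask_ratio * R)
    idx = torch.rand(B, R).argsort(dim=1)                  # random permutation per image
    masked_idx, visible_idx = idx[:, :n_masked], idx[:, n_masked:]
    visible = torch.gather(patches, 1, visible_idx.unsqueeze(-1).expand(-1, -1, D))
    latent = encoder(visible, visible_idx)                 # encode visible patches only
    recon = decoder(latent, masked_idx)                    # predict the masked patches
    target = torch.gather(patches, 1, masked_idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.mean((recon - target) ** 2)
```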
Existing work such as MoCo-V2 <cit.> has validated the effectiveness of projection heads in the FL paradigm. The projection heads output representations in a lower dimension. We use the same configuration of the projection head as in existing studies, i.e., two linear layers with rectified linear unit (ReLU) activation. As shown in Figure 2, the local encoders share their weights with the online network only. Although the structure of the target and online networks is the same, the parameters of the target network need to be updated through an exponential moving average of the online network. The formulation is shown in equation 4. ϱ_i = θ·ϱ_i-1 + (1 - θ) ·φ_i The notation θ refers to the target decay, which lies within the bounds of [0,1]. Existing studies have suggested using larger values of θ, i.e., around 0.9. The parameters of the online network and the target network are represented by φ and ϱ, respectively. The updates of the target network parameters are slow and steady until it reaches its optimal value. The similarity between two views of a sample is calculated using InfoNCE <cit.> as shown in equation 5. Sim = -log exp(q_+ * q_++/μ)/∑_0^N exp(q_+ * q_-/μ) where * refers to the cosine similarity between the views and μ represents a temperature hyperparameter. A memory bank is used to store the vectors of N negative samples. The notations q_+ and q_++ are the predictions such that q_+ = FC(Enc(ι_+)), where FC is the fully connected layer. We consider q_+ and q_++ as a pair of positive examples and q_- as a negative example. The images corresponding to negative examples are fed into both the encoder and the projection head to extract features, which are then stored in the memory bank. InfoNCE is then used to maintain the consistency between representations of the same images and to contrast them with different ones. Based on the feature vectors, a memory queue is generated that acts as the sequence of local negatives in a FIFO mechanism so that the contrastive loss with each input sample can be computed. The memory queue stores new features and discards the older ones in a recursive manner. §.§ Aggregation method and Local Network At each communication round in phase 2, the online network at the server side transmits the encoder to the clients. At the client side in phase 2, a linear classifier, specifically a multilayer perceptron (MLP), is added for the classification task. The ground truth labels from the limited labeled dataset are used to compute the cross-entropy loss, which is widely used for computer vision tasks. For each client, the objective function is minimized and the weights are adjusted. Each client then sends the encoder part to the server for model aggregation. The server aggregates the encoders with the FC layers, which is then considered the updated online network. The updated network is then used for the subsequent round of training. The pre-training in phase 1 mainly deals with data heterogeneity. However, to mitigate the label scarcity problem, we propose that the server should take into account the size of each client's dataset and the frequency of client selection. The assumption is that the advantage of the data associated with a client chosen several times dwindles, which limits the improvement concerning self-supervised learning. We propose an aggregation mechanism that maps this degradation relationship in order to improve the self-supervised learning process. Each client performs an update with respect to the formulation shown in equation 6.
ϖ←ϖ - η∇𝕃_CE where ϖ represents the model of the currently selected client subset, 𝕃_CE represents the cross-entropy loss, and η refers to the learning rate. At the client side, individual gradients are computed to update the model. Considering that all clients send their updates to the server, the aggregation using the proposed method can be carried out as shown in equation 7. ϖ_i+1←∑_t=1^Tn_t/nϖ_t+1β^F_t The term F_t represents the frequency with which a particular client has participated in the aggregation process. The parameter β introduces a gain-recession effect, meaning that the higher the frequency of a client's participation, the lower its impact in the aggregation process. § EXPERIMENTAL SETUP AND RESULTS This section presents the experimental analysis and results on IoMT, specifically medical image datasets, to validate the efficacy of the SelfFed framework. First, the details regarding the datasets are laid out, followed by the experimental setup for the SelfFed framework. We then present analyses showing the robustness of the proposed work with respect to the data heterogeneity and label scarcity issues and provide a comparative analysis with existing works. §.§ Datasets We validate the performance of the SelfFed method on two IoMT-related tasks. The first is the classification of COVID-19 and pneumonia from chest X-rays, and the second is to use retinal fundus images for diabetic retinopathy detection. The two tasks differ in terms of label distributions, image acquisition, and image modalities. For instance, chest X-rays are acquired using X-ray scanners while the fundus images are captured using a specialized camera. §.§.§ COVID-19 Dataset For the COVID-19 dataset, we adopt COVID-FL <cit.>, which represents federated data partitions in real-world settings. COVID-FL comprises over 20k chest X-ray images curated from eight data repositories that are available in the public domain. More details on each of the repositories are provided in <cit.>. Each of the eight repositories is represented as a data site to mimic a medical institution in the FL paradigm. The dataset also exhibits data heterogeneity and label scarcity, as the classes present may vary across specific sites. Furthermore, the acquisition machines that obtained the X-ray images differ, along with the patient populations. Therefore, the dataset naturally emulates the data heterogeneity and label scarcity issues. We follow the same protocol as in <cit.>, such that the split ratio for training and testing is set to 80% and 20%, which yields around 16k training and 4k testing images. The proportion of the data at each site is homogeneous. Each client (i.e., hospital) is considered to be a test set that can be held out for evaluation. §.§.§ Retina Dataset We also use the Diabetic Retinopathy Competition dataset[https://www.kaggle.com/c/diabetic-retinopathy-detection] from Kaggle. The dataset comprises over 35k fundus images obtained using multiple specialized cameras. The original dataset divides the images into 5 categories, i.e., proliferating, severe, moderate, mild, and normal; however, in this study we have binarized the labels into diseased and normal. 9k balanced images are randomly selected from the dataset for the training set while 3k images are selected for the testing set. §.§ Experimental Setup §.§.§ Preparation of Non-IID Dataset The COVID-FL dataset already manifests skewness in feature and label distribution as in a real-world setting.
It has been observed in the COVID-FL dataset that some clients contain a volume of data significantly larger than that of the other clients. In this regard, such clients are partitioned further into non-overlapping sub-clients. After such partitioning, the total number of clients stands at 12 for the COVID-FL dataset. For the Retina dataset, we use a Dirichlet distribution as suggested in <cit.> to model non-IID and IID characteristics. It has also been suggested that simulated partitions of a dataset provide greater freedom for investigation, as it is easy to manipulate them and testing can be performed for varying degrees of data heterogeneity. A dataset with Ω classes and M local clients can be randomly partitioned through simulation as shown in equation 8. ρ_i = {ρ_i,1,...,ρ_i,M}∼ Dir_M(δ) where the L_1 norm satisfies ||ρ_i|| = 1. A proportion ρ_i,N of the i-th class instances is assigned to the N-th client. The degree of heterogeneity is controlled by the parameter δ in equation 8, such that higher values of δ lead to a lower degree of heterogeneity. We consider three values of δ that simulate the level of data heterogeneity from IID to moderate non-IID and severe non-IID. The values of δ are selected to be 100, 1.0, and 0.5 for the Retina dataset with 5 clients. The dataset split with δ = 100 is referred to as IID (Split1), while the splits with values 1.0 and 0.5 are referred to as moderate non-IID (Split2) and severe non-IID (Split3). §.§.§ Data Augmentation The data augmentation parameters vary slightly between the pre-training and fine-tuning phases. Random horizontal flipping and cropping of patches of size 224x224 along with random scaling are performed in both phases. Random color jittering is performed during pre-training while random rotation (10 degrees) is performed during fine-tuning. For the Retina and COVID-FL datasets in the pre-training phase, the random scaling factor is selected from the ranges [0.2, 1.0] and [0.4, 1.0], respectively. Similarly, the random scaling factors for both datasets in the fine-tuning phase are selected from the ranges [0.8, 1.2] and [0.6, 1.0], respectively. §.§.§ Pre-training and Fine-tuning setup in SelfFed The simulations for this study have been carried out using the DistributedDataParallel (DDP) module of the PyTorch framework. The backbone for the proposed method is chosen to be the Swin Transformer <cit.>. For the MAE, 16x16 patches were selected. Augmentive masking was performed for 60% of the image patches in a random manner. AdamW is used as the optimizer in the proposed method with its default values. The same hyperparameters are used for both federated and centralized learning. For the Retina dataset, the batch size and learning rate were set to 128 and 1e^-3, while for the COVID-FL dataset, the parameters were set to 64 and 3.75e^-4. The cosine decay for SelfFed-MAE was set to 0.05. SelfFed-MAE uses a warmup period of five epochs and runs for 1.6k communication rounds. For fine-tuning, we train the model with a batch size of 256 and a learning rate of 3e^-3 for the Retina dataset, and a batch size of 64 with the same learning rate for the COVID-FL dataset. The fine-tuning is performed for 100 communication rounds. We validate our approach using accuracy as the evaluation metric. The choice of evaluation metric is consistent with existing studies. A condensed sketch of one fine-tuning round, combining the client update of equation 6 with the aggregation of equation 7, is given below.
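In this sketch, the client selection, local data loaders, and model internals are placeholders, and the value of β is an assumption rather than a reported setting; only the two update rules themselves are mirrored.

```python
import copy
import torch
import torch.nn.functional as F

def client_update(model, loader, lr=3e-3, epochs=1):
    """Local supervised update on the client's few labeled samples (equation 6)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
    return model.state_dict()

def aggregate(global_model, client_states, client_sizes, client_freqs, beta=0.9):
    """Frequency-aware weighted aggregation (equation 7): clients selected more
    often contribute less, controlled by the factor beta ** frequency."""
    total = float(sum(client_sizes))
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        new_state[key] = sum(
            (n / total) * (beta ** f) * state[key]
            for state, n, f in zip(client_states, client_sizes, client_freqs)
        )
    global_model.load_state_dict(new_state)
    return global_model
```

Note that, following equation 7 literally, the weights n_t/n·β^F_t are not re-normalized, so clients that have already been selected many times are progressively down-weighted in the global model.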
§.§ Results for Data Heterogeneity To validate the efficacy of the SelfFed-MAE approach, we compare our results with some baseline methods that include Swin Transformer (from scratch), ImageNet pre-training with Swin Transformer, ImageNet pre-training with ViT, ImageNet pre-training with BEiT, and ImageNet pre-training with MAE. For both the datasets, the baseline approaches are pre-trained on ImageNet22K <cit.> in a centralized manner, while the Swin Transformer (from scratch) is trained in a decentralized manner. Additionally, for COVID-FL, we compare our approach with pre-trained Swin Transformer, ViT, BEiT, and MAE, on CXR14 dataset <cit.> which comprises of around 112K images. We run the fine-tuning process for 1K communication rounds and 100 communication rounds for the networks trained from scratch and for pre-trained networks, respectively. The results are reported in Figure 3, 4, and 5. The results show that the SelfFed-MAE is consistent in terms of result on both the dataset concerning data heterogeneity problem. Methods such as MAE and BEiT pre-trained on ImageNet show a large discrepancy (drop) in accuracy. Even when the heterogeneity level of the data is severe the proposed method yields not only the best accuracy but also a consistent hold over the performance degradation. We observe maximum accuracy improvement on Retina and COVID-FL dataset, i.e. 8.8% and 4.1%, using SelfFed-MAE over pre-trained networks, respectively. §.§ Results for Label Scarcity In order to prove that the SelfFed framework can provide improved performance even with limited number of labels, we conduct the following experiment on Retina Dataset. We use 10%, 30%, and 70% of labeled samples at the SelfFed fine-tuning stage, which results in 1K, 3K, and 6K labeled images, respectively. The results for this experiment are reported in Figure 6. Even with 10% of the labeled samples, our method achieves better performance in comparison to other baselines while using 100% of the labeled data in both IID and non-IID settings. The results for training Swin Transformer from scratch is compliant with the existing works as it does not perform well under less number of labeled samples. § CONCLUSION This paper addresses an understudied scenario in FL paradigm where both the data heterogeneity and label scarcity issues are handled simultaneously. We proposed SelfFed framework that used augmentive modeling by leveraging MAE method to deal with data heterogeneity. We also propose a novel constrastive network that handles two views of an instance and propose a novel aggregation strategy that reduces the impact of a client who participated various times in the fine-tuning process. The fine-tuning stage comprising of our contrastive network and novel aggregation strategy overcomes the label scarcity issue. We illustrate with our experimental results that the proposed SelfFed is efficient when using non-IID data and performs better than existing baselines such as Swin Transformer pre-trained on ImageNet, ViT pre-trained on ImageNet, MAE pre-trained on ImageNet, and BEiT pre-trained on ImageNet. Our results also indicate that the SelfFed framework is successful in handling label scarcity, while outperforming existing baselines to the best of our knowledge. 1 IEEEtran ref1 Khowaja, S.A., Prabono, A.G., Setiawan, F., Yahya, B.N. and Lee, S.L., 2018. Contextual activity based Healthcare Internet of Things, Services, and People (HIoTSP): An architectural framework for healthcare monitoring using wearable sensors. 
Computer Networks, 145, pp.190-206. ref2 C. He, M. Annavaram, and S. Avestimehr, “Group knowledge transfer: Federated learning of large cnns at the edge,” NeurIPS, vol. 33, 2020. ref3 B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial intelligence and statistics, 2017, pp. 1273–1282 ref6 Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konecnˇ y, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. arXiv preprint arXiv:2003.00295, 2020.2. ref7 Wang J, Liu Q, Liang H, Joshi G, Poor HV. Tackling the objective inconsistency problem in heterogeneous federated optimization. Advances in neural information processing systems. 2020;33:7611-23. ref8 Li T, Sahu AK, Zaheer M, Sanjabi M, Talwalkar A, Smith V. Federated optimization in heterogeneous networks. Proceedings of Machine learning and systems. 2020 Mar 15;2:429-50. ref10 Khowaja SA, Dev K, Khowaja P, Bellavista P. Toward energy-efficient distributed federated learning for 6G networks. IEEE Wireless Communications. 2021 Dec;28(6):34-40. ref11 Itahara S, Nishio T, Koda Y, Morikura M, Yamamoto K. Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-iid private data. IEEE Transactions on Mobile Computing. 2021 Mar 31;22(1):191-205. ref13 Chen X, He K. Exploring simple siamese representation learning. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition 2021 (pp. 15750-15758). ref14 Grill JB, Strub F, Altché F, Tallec C, Richemond P, Buchatskaya E, Doersch C, Avila Pires B, Guo Z, Gheshlaghi Azar M, Piot B. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems. 2020;33:21271-84. ref15 Caron M, Misra I, Mairal J, Goyal P, Bojanowski P, Joulin A. Unsupervised learning of visual features by contrasting cluster assignments. Advances in neural information processing systems. 2020;33:9912-24. ref16 Chen T, Kornblith S, Norouzi M, Hinton G. A simple framework for contrastive learning of visual representations. InInternational conference on machine learning 2020 Nov 21 (pp. 1597-1607). PMLR. ref17 Qu L, Zhou Y, Liang PP, Xia Y, Wang F, Adeli E, Fei-Fei L, Rubin D. Rethinking architecture design for tackling data heterogeneity in federated learning. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022 (pp. 10061-10071). ref18 A. S. Hervella, J. Rouco, J. Novo, M. Ortega, Learning the retinal anatomy from scarce annotated data using self-supervised multimodal reconstruction, Applied Soft Computing 91 (2020) 106210. ref19 Sermanet P, Lynch C, Chebotar Y, Hsu J, Jang E, Schaal S, Levine S, Brain G. Time-contrastive networks: Self-supervised learning from video. In2018 IEEE international conference on robotics and automation (ICRA) 2018 May 21 (pp. 1134-1141). IEEE. ref20 Oord AV, Li Y, Vinyals O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. 2018. ref21 He K, Fan H, Wu Y, Xie S, Girshick R. Momentum contrast for unsupervised visual representation learning. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition 2020 (pp. 9729-9738). ref22 He K, Chen X, Xie S, Li Y, Dollár P, Girshick R. Masked autoencoders are scalable vision learners. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022 (pp. 16000-16009). 
ref23 Bao H, Dong L, Piao S, Wei F. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254. 2021. ref24 Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B. Swin transformer: Hierarchical vision transformer using shifted windows. InProceedings of the IEEE/CVF international conference on computer vision 2021 (pp. 10012-10022). ref25 Huang Y, Chu L, Zhou Z, Wang L, Liu J, Pei J, Zhang Y. Personalized cross-silo federated learning on non-iid data. InProceedings of the AAAI Conference on Artificial Intelligence 2021 May 18 (Vol. 35, No. 9, pp. 7865-7873). ref26 Chen HY, Tu CH, Li Z, Shen HW, Chao WL. On pre-training for federated learning. arXiv preprint arXiv:2206.11488. 2022 Jun 23. ref27 Wang P, Han K, Wei XS, Zhang L, Wang L. Contrastive learning based hybrid networks for long-tailed image classification. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition 2021 (pp. 943-952). ref28 Yan R, Qu L, Wei Q, Huang SC, Shen L, Rubin D, Xing L, Zhou Y. Label-efficient self-supervised federated learning for tackling data heterogeneity in medical imaging. IEEE Transactions on Medical Imaging. 2023 Jan 2. ref29 Chen X, Fan H, Girshick R, He K. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297. 2020. ref30 Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. Imagenet: A large-scale hierarchical image database. In2009 IEEE conference on computer vision and pattern recognition 2009 Jun 20 (pp. 248-255) ref31 Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. InProceedings of the IEEE conference on computer vision and pattern recognition 2017 (pp. 2097-2106).
http://arxiv.org/abs/2307.02349v2
20230705150834
Error Approximation and Bias Correction in Dynamic Problems using a Recurrent Neural Network/Finite Element Hybrid Model
[ "Moritz von Tresckow", "Herbert De Gersem", "Dimitrios Loukrezis" ]
cs.CE
[ "cs.CE" ]
aff1]Moritz von Tresckowcor1 aff1]Herbert De Gersem aff1]Dimitrios Loukrezis [aff1] organization=Technische Universität Darmstadt, Institute for Accelerator Science and Electromagnetic Fields (TEMF), addressline=Schlossgartenstr. 8, city=64289 Darmstadt, country=Germany [cor1]Corresponding author: [email protected] This work proposes a hybrid modeling framework based on recurrent neural networks (RNNs) and the finite element (FE) method to approximate model discrepancies in time dependent, multi-fidelity problems, and use the trained hybrid models to perform bias correction of the low-fidelity models. The hybrid model uses FE basis functions as a spatial basis and RNNs for the approximation of the time dependencies of the FE basis' degrees of freedom. The training data sets consist of sparse, non-uniformly sampled snapshots of the discrepancy function, pre-computed from trajectory data of low- and high-fidelity dynamic FE models. To account for data sparsity and prevent overfitting, data upsampling and local weighting factors are employed, to instigate a trade-off between physically conforming model behavior and neural network regression. The proposed hybrid modeling methodology is showcased in three highly non-trivial engineering test-cases, all featuring transient FE models, namely, heat diffusion out of a heat sink, eddy-currents in a quadrupole magnet, and sound wave propagation in a cavity. The results show that the proposed hybrid model is capable of approximating model discrepancies to a high degree of accuracy and accordingly correct low-fidelity models. Bias Correction Dynamic Problems Finite Elements Hybrid Modeling Model Error Approximation Multi-Fidelity Modeling Recurrent Neural Networks § INTRODUCTION Over the last decade, ml has established itself as one of the prime research subjects in the contemporary sciences, finding applications in biology <cit.>, engineering <cit.>, physics <cit.>, and medicine <cit.>, to name only a few areas. Even though this development can to some extent be attributed to hype <cit.>, there are domains in which ml methods have outperformed state-of-the-art techniques. Explicit examples include the development of cnn and graph neural networks for image classification <cit.>, large language models for natural language processing <cit.>, and deep reinforcement learning for applications in autonomous driving <cit.>. However, ml models are subjected to the curse of dimensionality and training often involves solving nonlinear optimization problems which causes training times to be a significant concern. Nevertheless, with processing units such as CPUs and GPUs becoming more performant each year <cit.>, the impact of ml will most likely increase even further. On the basis of these recent successes, ml quickly penetrated traditional scientific computing methods <cit.>. Recent advances in this area lie in the emergence of pinn <cit.>, hybrid modeling <cit.>, but also in the field of data-driven computing <cit.>. pinn incorporate physics-based loss functions to train nn to exhibit physically conforming behavior, hybrid modeling approaches seek to combine "the best of both worlds", and data-driven computing methods extend traditional numerical approximation methods for pde to directly incorporate measurement data. Worthy of note, however, is that ml does not always improve upon established methods and its success or lack thereof significantly depends on the problem setting to which it is applied <cit.>. 
For instance, deep learning inherently requires nonlinear optimization and is at a natural disadvantage when competing with traditional solvers in low-dimensional, linear problem settings, consequently requiring nonlinear and/or high-dimensional problem settings to achieve a computational benefit <cit.>. In the authors' opinion, data-based bias correction and model validation, a framework discussed in detail in the recent work of Levine and Stuart <cit.>, provides exactly such a setting. In particular, let us consider the case of multi-fidelity modeling, where computationally inexpensive, low-fidelity models are employed to approximate time- and resource-demanding high-fidelity models, resulting in a model hierarchy according to accuracy and computational cost <cit.>. Bias correction and model validation occur naturally in multi-fidelity modeling. In bias correction, a corrective term is computed, which accounts for the discrepancy between models of different fidelity <cit.>. In model validation, the accuracy of the low-fidelity model is validated against a reference <cit.>. In both cases, approximating the systematic error, also referred to as model bias or discrepancy, is necessary. Primary approaches for learning a discrepancy function include Bayesian inference <cit.> and Gaussian processes <cit.>. In more recent years, bias correction by means of neural networks has also been attempted <cit.>. Specifically, Um et al. <cit.> successfully calculate a correction term for a fluid flow simulation by integrating the solver directly into the training loop of a cnn. Nevertheless, this approach calculates a correction operator and not the discrepancy function itself. In this work, we approximate the discrepancy functions between low- and high-fidelity fe simulations of dynamical systems by employing an rnn/fe-based hybrid model, which is trained on sparse, non-uniformly sampled trajectory data of the high-fidelity solution. The high- and low-fidelity simulations differ by choice of computational mesh and modeling assumptions, consequently inducing both discretization and modeling errors. Similarly, the rnn/fe-based hybrid model is constructed by splitting spatial and temporal dependencies. To account for the sparsity of the training data and prevent the overfitting phenomenon, the training data is upsampled artificially using linear interpolation operators and localized Gaussian priors at missing time steps to provide noise stabilization, an approach commonly used in game design and image processing <cit.>. Local weighting factors on the training data are also employed to control the interpolation behavior of the model. The hybrid model is applied to three engineering-relevant test cases, namely, heat diffusion out of a heat sink, eddy-currents in a quadrupole magnet, and sound wave propagation in a cavity. The results show that the proposed hybrid model and training regimen are capable of approximating model discrepancies to a high degree of accuracy. The main contribution of this work lies in the application of rnn for discrepancy function approximation in the setting of multi-fidelity modeling and simulation, in combination with the fe method.
Even though surrogate modeling has been used extensively to approximate high-fidelity simulations <cit.>, using rnn to learn model errors has been first proposed in <cit.>, however, not in combination with fe-based models and with only limited numerical experiments that do not include highly non-trivial engineering applications, as the ones available in the present work. Additional contributions concern the specific architecture of the hybrid model, as well as the treatment of hybrid model training with sparse data, such that the physically correct interpolation behavior is ensured. The remaining of this paper provides, first, a theoretical overview of data-driven discrepancy approximation for model validation and bias correction (Section <ref>), followed by a presentation of the suggested hybrid modeling approach (Section <ref>), which highlights the hybrid model architecture and discusses the treatment of sparse training data. Numerical experiments showcasing the benefits of the proposed hybrid modeling approach are presented in Section <ref>. Last, concluding remarks and considerations for further methodological developments are available in Section <ref>. § PRELIMINARIES AND NOTATION §.§ Dynamical system simulation The simulation of dynamical systems commonly requires the spatial and temporal discretization of an underlying bvp, defined on Ω× [0,T], where Ω⊂ℝ^d denotes the spatial domain at dimension d ≤ 3 and T ∈ℝ_≥ 0 the time range. Spatial discretization refers to the approximation of the system states by a finite dimensional basis representation and temporal discretization refers to state propagation along a discrete time axis. Therein, implicit time stepping schemes are frequently used, due to their advantages in numerical stability. The discrete dynamical system can be described by 𝐱_t_k+1 = Ψ(𝐱_t_k+1,𝐱_t_k,t_k+1, t_k), where Ψ:ℝ^d→ℝ^d is a time propagation operator and 𝐱_t_k:Ω→ℝ^d with 𝐫↦𝐱_t_k(𝐫), 𝐫∈Ω, are the system states at time t=t_k where k is the time step indexing. In the case where the spatial discretization does not change over time, Ψ only operates on the dof of the states' basis coefficients. Accordingly, we can express (<ref>) as 𝐱̂_t_k+1 = Ψ(𝐱̂_t_k) with Ψ:ℝ^N_dof→ℝ^N_dof and 𝐱̂∈ℝ^N_dof, where N_dof∈ℕ denotes the number of spatial dof. Given an initial state 𝐱_t_0, we can solve (<ref>) for N_T ∈ℕ time steps of size Δ t∈ℝ. The solution is an approximation of the states' trajectories, the accuracy of which is dictated by the values of N_dof and N_T and the time stepping scheme. In this work, we focus on the fe method for spatial discretization and an implicit Euler scheme for time-stepping. §.§ Model fidelity and discrepancy Real-world scientific and engineering applications governed by dynamical problems are inherently complex, often requiring a large number of dof and time steps to be resolved to sufficient accuracy. Quite often, solution accuracy must be balanced against limitations in computational resources. A predominant factor in this trade-off is model fidelity, that is, the model's capability to precisely represent the physical system it approximates <cit.>. In the following, high- and low-fidelity models are denoted with Ψ_hifi and Ψ_lofi, respectively, as they will refer to dynamical system models, as they were introduced in Section <ref>. Accordingly, low- and high-fidelity system states at time t_k are respectively denoted with 𝐱_t_k^lofi and 𝐱_t_k^hifi. 
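For reference, the discrete propagator Ψ introduced above can be sketched for the common special case of a linear semi-discrete fe system M·dx/dt + K·x = f with an implicit Euler step (M + Δt·K)·x_{k+1} = M·x_k + Δt·f; this specific system form, the constant load vector, and the sparse factorization below are illustrative assumptions and not the setup of the test cases considered later.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_euler_propagator(M, K, f, dt):
    """Return Psi mapping x_k -> x_{k+1} for M dx/dt + K x = f via implicit Euler.

    M, K: scipy.sparse system matrices assembled by the FE code; f: load vector.
    """
    A = (M + dt * K).tocsc()
    solve = spla.factorized(A)          # factorize once, reuse for every time step
    def psi(x_k):
        return solve(M @ x_k + dt * f)
    return psi

def trajectory(psi, x0, n_steps):
    """Propagate the initial state and collect the snapshot matrix (n_steps+1, N_dof)."""
    states = [x0]
    for _ in range(n_steps):
        states.append(psi(states[-1]))
    return np.stack(states)
```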
The difference in the results between a low- and a high-fidelity model can be predominantly attributed to a systematic error, which is commonly called model discrepancy or prediction bias. Model discrepancy can be quantified using a so-called discrepancy function δ_t :ℝ^d →ℝ^d with δ_t(𝐫) = 𝐱_t^hifi(𝐫) - 𝐱_t^lofi(𝐫), which maps the system state onto its corresponding systematic error <cit.>. Non-systematic errors such as noise can similarly be expressed using an error function ε_t:ℝ^d→ℝ^d, where a common assumption is ε_t ∼𝒩(0,Σ), Σ∈ℝ^d× d. Under standard assumptions regarding the nature of the error and bias terms, for example, additivity or multiplicativity, the relationship between Ψ_hifi and Ψ_lofi can be expressed explicitly. For instance, under the additive bias and noise assumption, it holds that 𝐱_t_k^hifi = 𝐱_t_k^lofi + δ_t_k + ε_t_k, for k = 1,...,N_T. Multiplicative or more complex relations are also possible. For the remainder of this work, we neglect the noise term in (<ref>) and focus solely on the discrepancy function. §.§ Data-driven discrepancy function approximation for model validation and correction In the field of model validation, one is interested in whether the discrepancy function, δ_t, fulfills certain conditions, so as to determine whether the use of Ψ_lofi is justifiable <cit.>. In this context, a very general validation condition reads 1/T∫_[0,T]‖δ_t‖^2_2 dt < C, where C>0 and ‖·‖_2 is the L^2(Ω) norm. If δ_t fulfills (<ref>), the use of Ψ_lofi is tenable <cit.>. To that end, the functional form of δ_t_k, which essentially quantifies the accuracy of Ψ_lofi with respect to Ψ_hifi, must be inferred. While this could be accomplished by sufficiently sampling the high- and the low-fidelity models, the evaluation of the high-fidelity model must in many cases be avoided, typically due to its prohibitive computational cost. Instead, observed snapshot data 𝐗:={𝐱^o_t_k}_t_k∈ T_o, T_o = {t_k}_k=1^N_T, of its state trajectory may be available <cit.>, for instance, originating from measurement setups or auxiliary simulations. To that end, the superscript “o” is chosen to denote an observation. Then, the validation of the low-fidelity model requires a parametric model δ_θ, which approximates the discrepancy based on the snapshots 𝐗. This approximation can be accomplished with supervised learning algorithms. However, using data as a reference substantially increases the complexity of the validation problem, since 𝐗 might contain sparse, non-uniformly sampled, and noisy data. Possible remedies to this issue can be sought in novel physics-informed ml approaches, which integrate physics-inspired objective functions into the approximation process to induce physics-conforming behavior in sparse data regimes <cit.>. Irrespective of the choice of the parametric model δ_θ and the multi-fidelity models, the minimization problem to be solved in order to determine an optimal parameter set θ^* given the training data set 𝒟:={𝐱_t_k^o - 𝐱_t_k^lofi}_t_k∈ T_o, reads θ^* = arg min_θ𝒥_θ, where 𝒥_θ= 1/T∫_0^T‖δ_t -δ_θ‖_2 dt, for δ_t ∈𝒟. The parametric model δ_θ:ℝ^d →ℝ^d for θ∈ℝ^N_θ, where N_θ denotes the number of parametric dof, can be chosen to map the low-fidelity states 𝐱^lofi_t_k↦δ_θ onto the respective systematic error for all time instances t_k ∈ T_o and points in the spatial domain 𝐫∈Ω <cit.>. We note, however, that this is one example and that other mappings are also possible. Assuming a successful solution of the minimization problem (<ref>), there are numerous uses for the function δ_θ^*.
On the one hand, δ_θ^* can be used to approximate the validation condition (<ref>), ultimately verifying the low-fidelity model. On the other hand, it can be used to for bias correction, where we define a corrected system state 𝐱_t_k^corr = 𝐱_t_k^lofi + δ_θ^*(𝐱_t_k^lofi), where δ_θ^* models the discrepancy between the low-fidelity model and the reference data. The state trajectory resulting from (<ref>) is the bias corrected trajectory of the low-fidelity model. In any case, model validation or correction depend significantly on the accuracy of the parametric model. §.§ Modeling and discretization errors in transient finite element models Multi-fidelity modeling and simulation introduces different types of systematic errors. In the particular case of fe analysis, these errors can be categorized into three different classes, namely, discretization errors, numerical errors, and modeling errors <cit.>. Discretization errors occur when a function of a continuous variable is approximated by a finite dimensional basis representation, and primarily depend on the mesh resolution and the choice of the finite dimensional ansatz space. Reducing the discretization error requires a finer mesh resolution, thus increasing the number of dof. Numerical errors occur due to the finite precision of computation hardware, e.g., in the representation of real numbers as floating data types and the finite precision of iterative solvers. Truncation errors fall in the same category. Modeling errors occur due to assumptions and simplifications with respect to the problem itself. Common examples include misspecification of boundary conditions, incorrect definition of loading terms, linear approximations of otherwise nonlinear material responses, and 2D approximations of 3D geometries. For the approximation of the discrepancy function between fe models of varying fidelity, a suitable representation basis must be chosen. Assuming solely the discretization error δ^d, a natural choice would be to use the fe basis functions. In the context of this work, we use the fe basis of the low-fidelity model Ψ_lofi, as we focus on bias correction. Consequently, a low-fidelity representation of the high-fidelity system states must be made available. Thus, it is necessary to define a projection operator 𝒯:ℝ^N_dof^hifi→ℝ^N_dof^lofi, where N_dof^hifi and N_dof^lofi denote the number of dof of the high- and the low-fidelity model, respectively. [Note that, when considering model validation, a projection 𝒯:ℝ^N_dof^lofi→ℝ^N_dof^hifi is required, to get a high-fidelity representation of the low-fidelity states.] Assuming Ψ_hifi and Ψ_lofi to be defined on the same time grid, the discretization error at time step t_k is given as δ^d_t_k = 𝒯( 𝐱_t_k^hifi) - 𝐱_t_k^lofi_2, where 𝐱_t_k^hifi=Ψ_hifi(𝐱^hifi_t_k-1) and 𝐱_t_k^lofi=Ψ_lofi(𝐱^lofi_t_k-1) are the high-fidelity and low-fidelity states at time t. On the other hand, modeling errors δ^m occur in the assumptions that define the bvp, ultimately affecting the definition of the time propagators. This fact is illustrated most clearly by assuming that Ψ_hifi and Ψ_lofi are parametrized by parameter sets λ:=(λ_1,...,λ_l), such that 𝐱_t_k = Ψ(𝐱_t_k-1 | λ), and defined on the same computational mesh. Consequently, if λ_hifi≠λ_lofi, the high-fidelity model 𝐱^hifi_t_k=Ψ (𝐱^hifi_t_k-1 | λ_hifi) and the low-fidelity model 𝐱^lofi_t_k = Ψ (𝐱^lofi_t_k-1 | λ_lofi) with 𝐱^hifi_t_0=𝐱^lofi_t_0=𝐱_t_0. At each time step, this error can be quantified as δ^m_t_k = 𝐱_t_k^hifi - 𝐱_t_k^lofi_2. 
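To illustrate how such error snapshots can be evaluated in practice, the following Python sketch restricts a 1D high-fidelity field to a coarse mesh by linear interpolation (a simple stand-in for an L^2-projection 𝒯 between fe spaces), computes the per-step error norms, and checks the time-averaged validation criterion by a Riemann sum; the Euclidean norm over the dof replaces the spatial L^2(Ω) norm here:

import numpy as np

def project_to_lofi(x_hifi, nodes_hifi, nodes_lofi):
    # Toy projection operator T for a 1D nodal field (nodes must be sorted).
    return np.interp(nodes_lofi, nodes_hifi, x_hifi)

def error_snapshots(X_hifi, X_lofi, nodes_hifi, nodes_lofi):
    # ||T(x_hifi) - x_lofi|| for every common time step.
    proj = np.array([project_to_lofi(x, nodes_hifi, nodes_lofi) for x in X_hifi])
    return np.linalg.norm(proj - X_lofi, axis=1)

def is_valid(delta_norms, dt, C):
    # Riemann-sum version of the condition (1/T) * int ||delta_t||^2 dt < C.
    T = dt * len(delta_norms)
    return dt * np.sum(delta_norms ** 2) / T < C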
In most multi-fidelity modeling settings, discretization and modeling errors are present simultaneously. Thus, a combined error term must be considered, which captures both. Considering (<ref>) and (<ref>), the combined error term can be quantified as δ_t_k = 𝒯( 𝐱_t_k^hifi) - 𝐱_t_k^lofi_2, where 𝐱_t_k^hifi=Ψ_hifi(𝐱^hifi_t_k-1 | λ_hifi) and 𝐱_t_k^lofi=Ψ_lofi(𝐱^lofi_t_k-1 | λ_lofi) with 𝒯(𝐱_t_0^hifi)=𝐱_t_0^lofi. In the following sections, (<ref>) provides the basis for preprocessing the training data. § HYBRID MODELING METHODOLOGY §.§ Hybrid model elements and architecture In the following section, we discuss the nature of the discrepancy function δ_t and our subsequent choice for the parametric model δ_θ≈δ_t. Discrepancy functions often display complex dynamics, exhibiting piecewise smooth and non-smooth behavior in the problem domain. These dynamics occur because discrepancy functions are required to capture harmonic propagating behavior, but also phase transitions, material interfaces, and interpolation errors. This phenomenon can be attributed to a variety of reasons, arising partially due to the underlying physics, but also due to the spatial and temporal discretization schemes, e.g., because of mesh inconsistencies or time stepping errors. Considering these challenges, we split the approximation of time-dependent and spatially-dependent effects, such that an rnn is used to approximate the sequential, temporal dynamics, and the low-fidelity fe-basis functions are used to account for the spatial effects. As we seek to calculate a bias correction term for the low-fidelity model, we assume that the discrepancy function has the form δ_t_k(𝐫) = ∑_i=1^N_dof^lofiδ̂_i,t_k ϕ_i(𝐫), where δ̂_i,t_k are the time dependent dof of the finite dimensional basis at time t_k and the i-th spatial dof, {ϕ_i}_i=1^N^lofi_dof are the basis functions of the low-fidelity model, and 𝐫∈Ω. We denote with δ̂_t_k:=(δ̂_1,t_k,...,δ̂_N_dof^lofi,t_k) the vector containing the coefficients of (<ref>). The suggested hybrid modeling approach then consists of using the rnn to learn the time-dependent dof , δ̂_t_k, at each time step. Due to their universal approximation property, as well as their use of residual connections to account for time-dependencies in the data, rnn offer sufficient flexibility to approximate the dynamics in the data. In particular, a subcategory of rnn called lstm cells, are used to circumvent the vanishing gradient problem, by sharing information between the individual units and using hidden state variables to control the gating mechanism and shape information flow <cit.>. In our case, the rnn architecture itself comprises of two building blocks, namely a concatenation of lstm cells and a linear output layer. A visualization is provided in Figure <ref>. In each training iteration, the rnn considers N_p consecutive time steps simultaneously, where the coefficients of the low-fidelity system states, 𝐱_t_k, serve as inputs and the outputs approximate the discrepancy function's coefficients, δ̂_t_k. We denote the mapping of the rnn with δ^ NN :ℝ^N^lofi_dof×{t_k,..., t_k+N_p-1}→ℝ^N^lofi_dof×{t_k,..., t_k+N_p-1}, with δ^ NN(𝐱̂_t_k,...,𝐱̂_t_k+N_p-1) = {δ^ NN_t_k,...,δ^ NN_t_k+N_p-1}≈{δ̂_t_k,...,δ̂_t_k+N_p-1}, where δ^ NN_t_k=(δ^ NN_1,t_k,...,δ^ NN_N_dof^lofi,t_k) denotes the component-wise output of the rnn. As numerous time steps are mapped simultaneously, the in- and output dimension of the rnn is ℝ^N^lofi_dof× N_p. 
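A PyTorch-style sketch of such an architecture is given below; the hidden size, the number of layers, and the framework itself are illustrative choices of this sketch and not details of the implementation used in this work:

import torch
import torch.nn as nn

class DiscrepancyRNN(nn.Module):
    # LSTM cells followed by a linear read-out, mapping N_p consecutive
    # low-fidelity state vectors to the corresponding discrepancy coefficients.
    def __init__(self, n_dof, hidden_size=128, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_dof, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden_size, n_dof))

    def forward(self, x):              # x: (batch, N_p, n_dof)
        h, _ = self.lstm(x)            # hidden states for every time step
        return self.head(h)            # (batch, N_p, n_dof)

# one window of N_p = 8 consecutive time steps for a model with 278 spatial dof
model = DiscrepancyRNN(n_dof=278)
window = torch.randn(1, 8, 278)
delta_nn = model(window)               # predicted discrepancy coefficients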
Calculating (<ref>) with inputs {𝐱̂_t_k}_k=l · N_p^(l+1)N_p-1 for l=0,1,...,N^lofi_T/N_p-1 yields an approximation of δ̂_i,t_k on the whole trajectory. Choosing a similar basis representation as in (<ref>), we can define the rnn/fe hybrid model as δ_θ(𝐫) = ∑^N_dof^lofi_i=1δ^ NN_i,t_k ϕ_i(𝐫), ∀ k=1,...,N^lofi_T. As the fe basis functions already provide a basis for the discretized spatial domain, the rnn needs only to handle the variation of the coefficients in time, thus significantly simplifying the optimization problem and reducing training times. In Figure <ref>, the vectors 𝐜_t and 𝐡_t, t = t_k,...,t_k+N_p-1, denote the cell state and hidden state of the lstm cell. The cell state of the lstm consists of a parameter vector controlling the gating mechanism of the lstm cell and, ultimately, information flow, while the hidden state of the lstm consists of the cell output, which is simultaneously fed to the consecutive cell and to the output layer. We choose relu activation functions, which is a necessary choice for the approximation of the piecewise and non-differentiable behavior occurring in phase transitions. §.§ Normalization by non-dimensionalization A significant drawback of nn is their inability to adequately represent data with small values, a scenario which comes up when considering error functions. Thus, normalization procedures are in order, which scale the input data and significantly improve the performance of nn. Especially in physical systems, the quantities of interest can become very small as they are expressed relative to very small physical constants. Examples of such constants are the magnetic permeability μ∼ 10^-7 Vs/Am or the thermal conductivity κ∼ 10^-3 W/mK. Small physical geometries induce similar problems. The resulting error functions of such systems consist of small, fluctuating values, thus requiring a normalization procedure which scales the dynamical system appropriately, while remaining physically conforming at the same time. An example of such a scaling procedure is the non-dimensionalization of the physical system. In essence, non-dimensionalization removes the physical dimensions from underlying differential equations by suitable variable substitutions. The resulting differential equations have their physical dimensions partially or even completely removed. As an illustrative example, we present how the temporal and spatial differential operators transform under change of variables. Let 𝐫 = (r_x,r_y) and τ=t^-1_ct, r_x=r^-1_x,c r_x, r_y=r_y,c^-1 r_y for t_c [s], r_x,c [m], r^-1_y,c [m] ∈ℝ, be a coordinate transformation for which physical dimensions have been removed. The non-dimensionalised n-th time derivative as well as the Laplace operator transformation is given by ∂^n/∂ t^n = 1/t^n_c∂^n/∂τ^n and Δ = 1/√(|g|)∑_i=1^2∂_i(√(|g|)g_ii∂_i ) , with g = [ r^-2_x,c 0; 0 r^-2_y,c ], where g is the metric tensor and Δ the Laplace-Beltrami operator. The transformed wave equation then reads [1/t^2_c∂^2/∂τ^2 + c(1/r^2_x,c∂^2/∂r_x^2 + 1/r^2_y,c∂^2/∂r_y^2)]u(τ,r_x,r_y) = f(τ,r_x,r_y), where the solution to the original system can then be recovered by the backwards transformation t=t^-1_cτ, r_x=r_x,c^-1 r_x and r_y=r_y,c^-1 r_y. The resulting system can then be scaled such that nn can optimally fit the data, as well as conform to the physical equations. §.§ Hybrid model training In this section, we discuss the training of the rnn, including details on the choice of training data and how we deal with data sparsity. 
The training data set consists of snapshot data of the discrepancy function, which are calculated from the down-projected high-fidelity solution and the low-fidelity solution. Assuming N^hifi_T trajectory samples of the high-fidelity solution {𝐱_t_k^hifi}_t_k≤ N^hifi_T, we denote with T_hifi:={t_0,t_1,...,t_N^hifi_T} the respective time instances on which they are defined. Then, we can partition the time axis of the bvp into seperate intervals I_k:= [t_k, t_k+1] such that [0,T] = ⋃_k=0^N^hifi_T I_k holds for t_0=0 and t_N^hifi_T=T. As we can evaluate the low-fidelity model at more time instances than those included in the high-fidelity data, low-fidelity states 𝐱^lofi_t_k can be evaluated for t_k ∈ T_hifi, but also on intermediate time instances t_k<t_k_j<t_k+1 with j=1,...,N_I_k, where N_I_k denotes the number of intermediate time steps in the interval I_k. Consequently, the low-fidelity states 𝐱^lofi_t_k are defined on more time steps, namely T_lofi:=T_hifi∪{⋃_k=1^N^hifi_T⋃_j=1^N_I_kt_k_j}. We construct T_lofi by choice of the low-fidelity model, such that |T_hifi|≤ |T_lofi| holds and the time instances are uniform, i.e., t_k+1 - t_k = Δ t, ∀ t_k, t_k+1∈ T_lofi. Especially the latter point is important for the inputs of rnn. For a visualization of the difference between the sets T_hifi and T_lofi, see Figure <ref>. Given theses considerations, the training data consisting of discrepancy function snapshots is given as the set 𝒟_d:={𝒯(𝐱^hifi_t_k) - 𝐱^lofi_t_k}_t_k∈ T_hifi, where 𝒯 is a linear projection operator and t_k ∈ T_hifi. Note that the sampled instances of the high-fidelity data are not chosen randomly. Instead, they depend on the dynamics of the underlying physical system. In areas where the high-fidelity systems is very dynamic, for instance, when it is heavily excited, the trajectory is sampled more frequently than in areas where the system approaches steady state. In this work, we sample heuristically and note that there are more sophisticated methods to perform the data sampling, for instance, using active learning or optimal experimental design techniques. Whilst sparse data sets are the norm for most practical scenarios, for example in measurement setups, nn typically show bad interpolation behavior when trained solely with sparse data sets. This is partially due to the fact that no knowledge of the objective function is provided in sparsely sampled areas, wherein the rnn's interpolation accuracy decreases dramatically. In these scenarios, one has to “inform” the nn on the correct mode of behavior. One way to achieve this would be to add physically motivated constraints to the loss function, an approach frequently pursued in so-called physics-informed machine learning <cit.>. Another approach is to upsample the available training data as to reflect correct physical behavior, an approach commonly used in game design and image processing <cit.>. The main difference between these two approaches is on how the information regarding the correct interpolation behavior is encoded. In the former case, it is encoded in the formulation of the optimization problem. In the latter case, in the artificial (upsampled) data points the model is trained with. The latter approach is explained in more detail in the next section. §.§ Localized data upsampling and noise stabilization In this work, we resort to the approach of data upsampling, such that model training is performed within a standard supervised learning context. 
Introducing physics-inspired loss functions, whilst also a promising avenue, has the drawback of significantly complicating the optimization problem, often resulting in elongated training times. We propose an upsampling scheme based on a combination of localized, linear interpolation to calculate intermediate artificial system states for t_k∈ T_up:=T_lofi\ T_hifi, and a Gaussian prior for noise stabilization to prevent the overfitting phenomenon. Noise stabilization is a common approach in Gaussian Processes to improve numerical stability <cit.> and can be used to prevent nn from overfitting. The linear interpolation aspect controls the nn's behavior in the sparse data regime, whereas the prior distribution prevents the nn from overfitting. In each training epoch, the prior distribution is sampled to generate new artificial states that are locally bounded by the variance of the Gaussian prior. We choose a linear interpolation approach due to its simplicity and ease of use, as well as its applicability to a large category of problem types. We denote the locally linear interpolation function δ_I_k:I_k →ℝ^N^lofi_dof, defined on each interval I_k=[t_k,t_k+1] between consecutive training time instances, as δ_I_k(t) = (δ_t_k t_k+1 - δ_t_k+1 t_k)/(t_k+1-t_k) + t (δ_t_k+1 - δ_t_k)/(t_k+1-t_k). Essentially, (<ref>) interpolates linearly in intervals of sparse data, based on the boundary values found in the training data. We apply noise stabilization by assuming a Gaussian prior on the artificial intermediate states. Let δ_i,t_k_j = δ_I_k(t_k_j)|_i be evaluated in t_k_j∈ I_k for j=1,...,N_I_k and restricted to the i-th spatial dof. Then, we define the Gaussian prior p(δ_i,t_k_j) = δ_i,t_k_j + 𝒩(0,αδ_i,t_k_j_2), for i = 1,...,N^lofi_dof and t_k_j∈ T_up, where α∈ℝ is a weighting factor controlling the variance. The distribution (<ref>) is defined locally for the individual dof, where a lower variance is assumed for dof close to zero, thus preserving known boundary conditions, and a larger variance is allowed in domains where the discrepancy is non-zero. An upsampled data set 𝒟_up for t_k_j∈ T_up based on (<ref>) is then given as 𝒟_up:={δ_t_k_j∈ℝ^N^lofi_dof | δ_i,t_k_j∼ p(δ_i,t_k_j), i = 1,...,N^lofi_dof}_t_k_j∈ T_up. In each training epoch of the rnn, 𝒟_up is generated by sampling (<ref>) for all spatial dof. Thus, the sparse time series is upsampled to a complete trajectory using the varying artificial system states in 𝒟_up and the elements of 𝒟_d which remain fixed as part of the ground truth. The training of the rnn is based on a locally weighted loss function, given by 𝒥_θ = 1/N^lofi_dof( ∑_t_k∈ T_hifiβ_t_k |δ_t_k - δ^ NN_t_k |^ground truth + ∑_ t_k_j∈ T_up|δ^*_t_k_j - δ^ NN_t_k_j |^artificial states), where δ_t_k∈𝒟_d are part of the ground truth states, δ^*_t_k_j∈𝒟_up the sampled artificial states, β_t_k∈ℝ a local, time-dependent weighting factor, and |·| the Euclidean norm. In the initial training stages, we choose β_t_k=1, ∀ t_k∈ T_hifi, until the rnn has a coarse fit on the data. In the later training stages, we change the local weighting factors on 𝒟_d to β_t_k=1+δ_t_k - δ_θ_2, which are calculated by numerical quadrature. Note that β_t_k≥ 1. In that way, we ensure that the data set 𝒟_d is always weighted more heavily than 𝒟_up during rnn training. For the optimization, we use the adam algorithm with learning rate decay.
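A minimal Python sketch of this upsampling step is given below; it fills the gaps between the sparse discrepancy snapshots by linear interpolation over the time indices and perturbs only the artificial states with the locally scaled Gaussian noise, and would be called once per training epoch. The array shapes, the value of α, and the random number generator are implementation choices of this sketch:

import numpy as np

def upsample_discrepancy(delta_sparse, hifi_steps, n_steps, alpha=0.05, rng=None):
    # delta_sparse: (n_sparse, n_dof) discrepancy snapshots at the indices
    #               hifi_steps of the uniform low-fidelity time grid
    # n_steps:      total number of low-fidelity time steps
    rng = np.random.default_rng() if rng is None else rng
    n_dof = delta_sparse.shape[1]
    full = np.empty((n_steps, n_dof))
    for i in range(n_dof):
        full[:, i] = np.interp(np.arange(n_steps), hifi_steps, delta_sparse[:, i])
    artificial = np.ones(n_steps, dtype=bool)
    artificial[hifi_steps] = False                     # ground truth stays fixed
    full[artificial] += rng.normal(0.0, alpha * np.abs(full[artificial]))
    return full, artificial

# 19 sparse snapshots on a 100-step grid with 278 spatial dof (heat-sink sized)
rng = np.random.default_rng(0)
hifi_steps = np.sort(rng.choice(100, size=19, replace=False))
delta_sparse = rng.standard_normal((19, 278))
delta_full, is_artificial = upsample_discrepancy(delta_sparse, hifi_steps, 100, rng=rng)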
§ NUMERICAL EXAMPLES In the following numerical investigations, we employ the proposed hybrid model to approximate discrepancy functions in three engineering test cases governed by transient pde, namely, heat diffusion on a heat sink, eddy-currents in a quadrupole magnet, and sound wave propagation inside a cavity. For each test case, we consider a high-fidelity and a low-fidelity fe representation of the bvp. The difference between the two lies in the fe mesh refinement, as well as in modeling errors in the material laws, excitation, and domain geometry, depending on the test case. To highlight the necessity of the upsampling approach proposed in Section <ref>, we consider hybrid models with and without data upsampling and observe the impact of overfitting in the latter case. Both simulations are then compared to reference data of the discrepancy function, which is calculated from densely sampled high-fidelity data. The trained hybrid models are then employed for bias correction of the low-fidelity models using (<ref>), leading to significantly more accurate results. To assess the accuracy of the discrepancy function approximation and the bias corrected model, we consider the relative L^2(Ω) errors Δ_L^2 δ_θ := ∫_[0,T]δ_θ - δ_t _2 dt/∫_[0,T]δ_t _2 dt and Δ_L^2 𝐱 := ∫_[0,T]𝐱_t - 𝒯(𝐱^hifi_t) _2 dt/∫_[0,T]𝒯(𝐱^hifi_t) _2 dt, where the ·_2 is approximated via numerical integration over the computational mesh and ∫_[0,T]· dt via Riemannian sums. Last, considering the fe simulations, we restrict ourselves to the 2D case without loss of generality. Consequently, we can restrict our space of test functions to H_0(grad; Ω) := {u∈ L^2(Ω) with ∇(u)∈ L^2(Ω) and u|_∂Ω=0 }, for the derivation of the fe formulation in all test cases. Thus, w ∈ H_0(grad; Ω) are defined by the mesh nodes of the triangulation of Ω. For all test cases, we choose first order shape functions. For the fe implementation we use <cit.> and <cit.>. rnn are implemented using / <cit.>. §.§ Heat diffusion on a heat sink As first test case, we consider the heat diffusion problem on a 2D cross-section of a heat sink, see Figure <ref> (left). The heat sink geometry is defined on the domain Ω=[-l,l]^2, l∈ℝ, which consists of a thermally conductive region Ω_con and a less thermally conductive air region Ω_air. The boundary of the geometry consists of ∂Ω_d= ∂Ω∩∂Ω_air, to which we apply homogeneous Dirichlet boundary conditions and ∂Ω_nd = ∂Ω∩∂Ω_con, to which we apply non-homogeneous Dirichlet boundary conditions. The bvp reads ρ c_v ∂ u/∂ t - ∇·(κ ∇ u) =0 on Ω, u|_∂Ω_nd = c, u|_∂Ω_d = 0, where u is the temperature, ρ c_v the heat capacity, κ the thermal conductivity, and c a fixed temperature. The non-homogeneous Dirichlet boundary condition signifies a heat source underneath the heat sink, while the homogeneous Dirichlet boundary conditions assume constant temperature on the boundary. On the fins of the heat sink, we assume a heat conductivity of κ = 50 W(mK)^-1, following a linear deterioration of the heat conductivity by 25% close to the proximity of the tips. This deteroration can be attributed to material aging or other defects. Figure <ref> (right) shows the thermal conductivity including material defects, κ_hifi, plotted against the thermal conductivity without material defects, κ_lofi, along the middle fin of the heat sink. In Ω_air we assume κ = 0.5 W(mK)^-1. 
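As an illustration of the modeling error introduced here, the two conductivity profiles along a fin can be sketched as follows; the spatial extent of the deteriorated zone near the tip is an assumption of this sketch and not a value prescribed by the test case:

import numpy as np

def kappa_along_fin(s, kappa0=50.0, drop=0.25, defect_zone=0.2):
    # s in [0, 1]: normalized position along the fin from base to tip.
    # Constant kappa0 W/(mK), deteriorating linearly by `drop` (25 %)
    # over the last `defect_zone` fraction of the fin length (assumed).
    s = np.asarray(s, dtype=float)
    ramp = np.clip((s - (1.0 - defect_zone)) / defect_zone, 0.0, 1.0)
    return kappa0 * (1.0 - drop * ramp)

s = np.linspace(0.0, 1.0, 101)
kappa_hifi = kappa_along_fin(s)          # with material defect
kappa_lofi = np.full_like(s, 50.0)       # defect-free low-fidelity assumption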
To solve the heat diffusion problem using the fe method, we introduce the corresponding variational form by multiplying (<ref>) with a test function w_i∈ H_0(grad; Ω) and integrate over the domain. The variational form reads ∫_Ωρ c_v ∂ u/∂ t w_i dΩ - ∫_Ω∇·(κ∇ u) w_i dΩ = 0, for i=1,...,N_dof^lofi. Applying integration by parts and first order difference quotients ∂ u/∂ t = u_t_k+1 - u_t_k/Δ t results in the implicit Euler time stepping scheme ∫_Ωρ c_v u_t_k+1 w_i dΩ + Δ t ∫_Ωκ∇ u_t_k+1·∇ w_i dΩ = ∫_Ωρ c_v u_t_k w_i dΩ. We discretize the temperature u=∑^N^lofi_dof+N^lofi_bdry_j=1û_j w_j. The first N^lofi_dof shape functions w_j∈ H_0(grad; Ω) for j=1,...,N^lofi_dof are the trial functions and equal to the test functions, adopting the Ritz-Galerkin approach. {û_j}_j=1^N^lofi_dof are the dof. The additional N^lofi_bdry shape functions w_j for j=N^lofi_dof+1,...,N^lofi_dof+N^lofi_bdry together with the coefficients {û_j}_j=N^lofi_dof+1^N^lofi_dof+N^lofi_bdry discretize the non-homogeneous Dirichlet data. The resulting system reads (Δ t 𝐀 + 𝐌) û_t_k+1 = 𝐌 û_t_k, where 𝐀 is the stiffness matrix and 𝐌 the mass matrix and the individual entries of the respective matrices are given as 𝐌_ij = ∫_Ωρ c_v w_i w_j dΩ and 𝐀_ij = ∫_Ωκ∇ w_i ·∇ w_j dΩ. The columns of 𝐀 and 𝐌 for j=N^lofi_dof+1,...,N^lofi_dof+N^lofi_bdry are shifted from the left to right-hand side of (<ref>) by applying a Dirichlet lift <cit.>. For the simulation with either model, i.e, low- or high-fidelity, we assume ρ c_v=1 JK^-1m^-3, c=10 K, Δ t = 2· 10^-2 s, and N_T=100. The low-fidelity model Ψ_lofi(û_t_k^lofi | κ_lofi) has thermal conductivity κ_lofi with N_dof^lofi=278, while the high-fidelity model Ψ_hifi(û^hifi_t_ k | κ_hifi) has thermal conductivity κ_hifi and N_dof^hifi= 1408. The different material choices and mesh discretizations induce modeling and discretization errors between low-fidelity and high-fidelity model. A visualization of the heat distribution of the high-fidelity and low-fidelity model is depicted in Figure <ref> (rows 1 and 3). In both cases, we can observe that the heat sink rapidly transports heat from the excited base to its surroundings, eventually reaching a steady-state. To approximate the discrepancy function, we employ 19 out of 100 trajectory samples as training data, where more samples are chosen in the initial stages of excitation and a reduced number of samples when the system reaches steady-state. The training data instances are depicted as crosses at the respective time steps in Figure <ref>. We train the rnn according to the parameters in Table <ref>, from which we observe that 1000 training epochs are required to reduce Δ_L^2 δ_θ of the upsampled model to 0.27 % and equivalently, the non-upsampled model to 8.187 %. In Figure <ref> we display the spatially integrated discrepancy function δ_t_2 in each time step, once for the hybrid model, with and without upsampling, and once for reference data. In case the hybrid model is trained solely on the sparse data set, we observe a good agreement on the training data, however, large errors appear for previously unseen data, due to overfitting. This phenomenon is especially pronounced in areas where there exist larger gaps in the training data, for example between the time steps t_30=0.6 s and t_100=2 s. However, we also note that upsampling is not necessary in all regions on the trajectory for the hybrid model to exhibit correct interpolation behavior, as can be observed between time steps t_5=0.1 s and t_15 = 0.3 s for the non-upsampled model. 
In Figure <ref> (row 2), we observe the absolute value of the discrepancy function between the low- and high-fidelity models. As expected, the maximum error is observed at the tips of the heat sink fins, that is, in the area where the material defect occurs. Figure <ref> (row 4) shows the bias corrected model, for which we have Δ_L^2 u^corr=7.59· 10^-3 %. Compared to Δ_L^2 u^lofi=2.73 %, this is a significant correction to the low-fidelity model. §.§ Transient eddy-current problem in a quadrupole magnet As second test case, we examine the transient eddy-current problem on a 2D cross-section of a quadrupole magnet. The geometry is depicted in Figure <ref> (left) <cit.>. The quadrupole is defined on a circular domain Ω, which consists of an iron yoke Ω_Fe, coils for current excitation Ω_s, and an aperture Ω_p. The domain boundary ∂Ω consists of the outer boundary of the iron domain ∂Ω_Fe. The bvp describing the dynamical system is given by the magneto-quasistatic Maxwell equations ∇×𝐡 = 𝐣, ∇×𝐞 = - ∂𝐛/∂ t, ∇·𝐛 = 0, ∇·𝐣 = 0, where 𝐡 [Am^-1] is the magnetic field strength, 𝐛 [Vsm^-2] the magnetic flux density, and 𝐣 [Am^-2] the current density. The constitutive relations 𝐡 = ν 𝐛 and 𝐣 = σ 𝐞, with ν [mH^-1] as the magnetic reluctivity and σ [Sm^-1] as the electric conductivity, describe the material responses. Choosing the vector potential ansatz 𝐛=∇×𝐚, where 𝐚 is the magnetic vector potential and 𝐛 the magnetic flux density, the eddy-current problem can be given as the time-dependent bvp ∇×(ν∇×𝐚) + σ∂𝐚/∂ t = 𝐣_s, 𝐚|_∂Ω = 0, where 𝐣_s is the source current density. For the current excitation, we assume the current I_hifi(t) depicted in Figure <ref> (right). This choice of current excitation is motivated qualitatively by the ramping procedure that provides a linear field increase during acceleration with current plateaus for beam insertion and extraction and a current decay after switch-off. An approximate current I_lofi(t) is additionally considered, also depicted in Figure <ref>, which consists of a linear ramp and de-ramp. In both cases, the currents are distributed in the excitation domain, yielding a current density that can be calculated by 𝐣_s(𝐫,t) = |Ω_s|^-1 I(t) 𝐞_z, where 𝐞_z is the unit vector in the z-direction. To confine the magnetic quadrupole field inside the magnet, we choose homogeneous boundary conditions on ∂Ω. Similar to the previous section, we express (<ref>) in its variational form and spatially discretize with vectorial first-order shape and test functions over a triangulation of Ω. Due to geometrical considerations, we can safely neglect transverse effects of the vector potential, which results in a single vectorial component 𝐰_i = w_i 𝐞_z, with w_i∈ H_0(grad; Ω). The magnetic vector potential is approximated via the ansatz function 𝐚 = ∑_j=1^N_dof^lofiâ_j 𝐰_j, where the dof {â_j}_j≤ N_dof^lofi lie on the mesh nodes. In matrix-vector notation, the fe formulation reads (Δ t 𝐀 + 𝐌) â_t_k+1 = Δ t 𝐛(t_k+1) + 𝐌 â_t_k, where 𝐀 and 𝐌 are the stiffness and mass matrix, respectively, and 𝐛 is the loading vector. The entries of 𝐀 and 𝐌 are given as 𝐀_ij = ∫_Ω(ν∇×𝐰_i ) · (∇×𝐰_j ) dΩ and 𝐌_ij = ∫_Ωσ 𝐰_i ·𝐰_j dΩ, while the entries of the right hand side loading vector are 𝐛_i(t_k) = ∫_Ω𝐣_s(t_k) ·𝐰_i dΩ. For the simulation, we assume σ_Fe = 1.04 · 10^7 Sm^-1 and ν_Fe = 2 · 10^-3 ν_0 in Ω_Fe, and σ = 1 Sm^-1 and ν = ν_0 in Ω_p and Ω_s. Furthermore, we assume a constant time step of Δ t = 1· 10^-2 s and N_T=327 time steps.
The low-fidelity model Ψ_lofi(â^lofi_t_k+1, â^lofi_t_k,Δ t | I_lofi(t_k+1)) is parametrized with the current I_lofi and mesh resolution N_dof^lofi = 895, while the high-fidelity model Ψ_hifi(â^hifi_t_k+1, â^hifi_t_k, Δ t | I_hifi(t_k+1)) uses the current I_hifi and N_dof^hifi=277 594. Both multi-fidelity models are non-dimensionalized as described in Section <ref>, such that the magnetic vector potential is normalized to the unit square. A visualization of the potential distribution of the high-fidelity and low-fidelity model is depicted in Figure <ref> (rows 1 and 3). In both cases, we can observe the eddy current phenomenon occurring in the iron yoke Ω_Fe. As training data, we use 79 out of 327 trajectory samples at the respective time steps, depicted as crosses in Figure <ref>. In this test case, we increase the samples in areas where the current excitation changes from an increasing to a decreasing flank. We train the model according to the parameters in Table <ref>, from which we observe that 1000 training epochs are required to reduce Δ_L^2δ_θ to 1.538% for the upsampled model, and to 8.187% for the non-upsampled model. In Figure <ref> we display the spatially integrated discrepancy function δ_t_2 at each time step, for the hybrid model with and without upsampling and the reference data. If the hybrid model is trained using solely the sparse data set, we observe a good agreement on the data points. However, the model overfits and performs poorly for previously unseen data, albeit not as strongly as in the heat sink test case. This phenomenon is again prevalent in domains with sparse training data. Similar to the previous section, the hybrid model with artificial upsampling generally yields good agreement on the complete data set. However, there are certain areas in which the approximation is suboptimal, in particular when the current excitation changes from a rising to a falling flank. A visualization of the discrepancy is given in Figure <ref> (row 2), where we observe that the discrepancy primarily occurs at the interface between the iron and the air domain. Figure <ref> (row 4) shows the bias corrected model, for which we have Δ_L^2 𝐚^corr=0.613 %. Compared to Δ_L^2 𝐚^lofi=39.847 %, this is a significant correction to the low-fidelity model. The correction is obvious both visually and by comparing the approximation errors of the low-fidelity and the bias corrected models.
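For completeness, the implicit update of this test case, (Δ t 𝐀 + 𝐌) â_t_k+1 = Δ t 𝐛(t_k+1) + 𝐌 â_t_k, can be sketched in Python as follows; the sparse toy matrices and the crude ramp current stand in for the actual fe assembly and excitation:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def step_eddy_current(M, A, load, a0, dt, n_steps):
    # March (dt*A + M) a_{k+1} = dt*b(t_{k+1}) + M a_k with a constant system
    # matrix, factorized once; `load` returns the load vector at time t.
    lhs = spla.splu((dt * A + M).tocsc())
    traj = [a0]
    for k in range(n_steps):
        traj.append(lhs.solve(dt * load((k + 1) * dt) + M @ traj[-1]))
    return np.asarray(traj)

n = 60                                                    # toy problem size
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
M = sp.identity(n, format="csc")
ramp = lambda t: min(t, 1.0) * np.ones(n)                 # crude ramp excitation
a_traj = step_eddy_current(M, A, ramp, np.zeros(n), dt=1e-2, n_steps=327)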
The wave excitation f is a Gaussian impulse, as shown in Figure <ref> (right), applied as a point source at 𝐫_s∈Ω as indicated in Figure <ref> (left). Similar to the previous sections, we express (<ref>) in its variational form and spatially discretize with first-order shape and test functions w_i ∈ H_0(grad; Ω). The wave equation solution is approximated via u = ∑_j=1^N_dof^lofiû_j w_j, where the dof {û_j}_j≤ N_dof^lofi lie on the mesh nodes. As (<ref>) is an equation of second order, discretization in time requires a central differences scheme ∂^2 u/∂ t^2 = u_t_k+1-2u_t_k+u_t_k-1/Δ t^2. The resulting discretized fe system reads (𝐌+v^2 Δ t^2 𝐀) û_t_k+1 = Δ t^2 𝐛(t_k+1) + 2 𝐌 û_t_k-𝐌 û_t_k-1, where û is a vector containing the dof of the fe basis and the mass and stiffness matrices are given as 𝐌_ij = ∫_Ω w_i w_j dΩ and 𝐀_ij = ∫_Ω∇ w_i ·∇ w_j dΩ, and the right hand side loading vector 𝐛 is given at each time instance t_k as 𝐛_i(t_k ) = ∫_Ω f(t_k ) w_i dΩ. For the simulation, we assume v=343 ms^-1, Δ t = 4· 10^-5 s, and N_T = 100 time steps. The low-fidelity model Ψ_lofi(û_t_k+1^lofi, û_t_k^lofi, û_t_k-1^lofi, Δ t | Ω_lofi) uses the approximate cavity geometry Ω_lofi with N_dof^lofi = 169. The high-fidelity model Ψ_hifi(û_t_k+1^hifi, û_t_k^hifi, û_t_k-1^hifi, Δ t | Ω_hifi) uses the “true” cavity geometry Ω_hifi with N_dof^hifi = 3997. The modeling and discretization errors are here induced by the geometry variation and the difference in mesh discretization. A visualization of the wave propagation of the high-fidelity and low-fidelity model is depicted in Figure <ref> (rows 1 and 3). We observe that both waves propagate simultaneously, however, the low-fidelity solution is much coarser and its magnitude is significantly off. For the discrepancy function approximation, we use 51 out of 100 trajectory instances as training data, depicted as crosses at the respective time steps in Figure <ref>. The hybrid model is trained according to the parameters in Table <ref>, from which we observe that 2500 training epochs are required to reduce Δ_L^2δ_θ to 0.200% for the upsampled model and to 14.797% for the non-upsampled model. In Figure <ref>, the spatially integrated discrepancy function δ_t_2 is displayed at each time step, for the hybrid model with and without upsampling and the reference data. Similar to the previous sections, the hybrid model with artificial upsampling generally yields good agreement on the complete data set and the hybrid model without upsampling tends to overfit in regions with sparse data. Nevertheless, in this case, we require a larger training data set to achieve a good approximation, that is, slightly more than half of the available samples. This can be attributed to the fact that the underlying dynamical system is of second order and that the underlying discrepancy function exhibits locally non-smooth behavior. Consequently, higher demands are placed on the choice of linear upsampling scheme resulting in the training data to be sampled more densely. A visualization of the discrepancy function is given in Figure <ref> (row 2), where we observe in t_50=2· 10^-3 s and t_60=2.4· 10^-3 s that a significant portion of the error occurs near the variation in the geometry. In Figure <ref> (row 4), the solution of the bias corrected model is shown, for which we have Δ_L^2 u^corr=9.99· 10^-2 %. In comparison to Δ_L^2 u^lofi=27.53 %, we again verify that a significant correction to the low-fidelity model has been achieved. 
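The second-order time stepping of this test case can likewise be sketched as follows; the sparse toy matrices and the Gaussian point-source pulse are placeholders for the actual fe assembly and excitation:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def step_wave(M, A, load, u0, u1, v, dt, n_steps):
    # Central differences: (M + v^2 dt^2 A) u_{k+1} = dt^2 b(t_{k+1}) + 2 M u_k - M u_{k-1}.
    # Two initial states are needed since the scheme is second order in time.
    lhs = spla.splu((M + v ** 2 * dt ** 2 * A).tocsc())
    traj = [u0, u1]
    for k in range(1, n_steps):
        rhs = dt ** 2 * load((k + 1) * dt) + 2 * (M @ traj[-1]) - M @ traj[-2]
        traj.append(lhs.solve(rhs))
    return np.asarray(traj)

n = 80
A = (sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2).tocsc()
M = sp.identity(n, format="csc")
source = np.zeros(n); source[n // 2] = 1.0                    # point source location
load = lambda t: source * np.exp(-((t - 5e-4) / 2e-4) ** 2)   # Gaussian pulse
u_traj = step_wave(M, A, load, np.zeros(n), np.zeros(n), v=343.0, dt=4e-5, n_steps=100)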
§ CONCLUSION AND OUTLOOK This work presented a hybrid modeling framework for the approximation and correction of model bias in the setting of multi-fidelity modeling and simulation of dynamic/transient systems. The proposed hybrid modeling approach combines standard fe approximation with an rnn, where the former accounts for the spatial effects of the approximation and latter for the temporal dynamics. The hybrid model first approximates the discrepancy between models of varying fidelity and is subsequently used to correct the low-fidelity models and increase their accuracy. The presented numerical results show that the proposed hybrid modeling method is capable of yielding accurate approximations of the discrepancy function and, accordingly, significantly improved bias corrected models, for a variety of dynamical fe engineering simulations. In all considered test cases, the discrepancy function is approximated with an error below 2%, even if it displays a locally non-smooth behavior. Furthermore, the hybrid model remains accurate even if sparse training data sets are employed, given that a suitable upsampling scheme is provided, which controls the interpolation behavior in regions with only sparse data. Local weighting factors further ensure that the rnn provides a good fit of the training data and avoids over-smoothing. In all considered test cases, the rnn, consequently, the hybrid model is trained to sufficient accuracy after 1000 - 2500 epochs, thus resulting in quite modest training times, given the dimensionality of input and output. This is accomplished due to the nature of the proposed hybrid model, which splits spatial and temporal dependencies. The model already provides a spatial basis in form of the fe basis functions, hence, the rnn focuses on approximating the temporal dependencies only, which is a task very much suited to rnn in general. Despite the promising results shown in this work, there are limitations to the proposed hybrid modeling method worth mentioning. First, the accuracy of the hybrid model depends significantly on the upsampling scheme, especially within the domains with sparse data. The choice of upsampling scheme is thus problem dependent and requires assumptions on the underlying dynamical systems and discrepancy functions. In addition, the training data cannot be chosen completely at random, as samples within specific regions of the trajectory contain more information about the underlying dynamics than others. Introducing an optimal experimental design approach, e.g., knowing a priori which time steps to sample as to achieve a good approximation, would be advisable. Finally, the projection operator is problem dependent as well, depending primarily on the choice of basis functions used to represent the discrepancy function. In the context of this work, this choice comes naturally due to the use of the fe method, however, the generalization to other basis representations should be explored. To provide an outlook, there are numerous possible extensions to the presented methodology. Apart from considering more sophisticated rnn architectures, a possible avenue would be to explore localized Gaussian Processes with physics-inspired priors to upsample the physical data in a better way <cit.>. Another possibility would be to include physics inspired loss functions, similar to the concept of pinn <cit.>, as a substitute or complement to data upsampling. 
Last, the performance of hybrid models trained with real-world observations, for example, collected from measurements or experiments, could be investigated. In such cases, one would face the additional task of noise treatment, e.g., separating noise from the underlying trajectory data. § ACKNOWLEDGEMENTS The authors would like to thank Armin Galetzka for thoroughly reading this manuscript and for providing helpful insights for improvements. Moritz von Tresckow acknowledges the support of the German Federal Ministry for Education and Research (BMBF) via the research contract 05K19RDB. Dimitrios Loukrezis and Herbert De Gersem acknowledge the support of the German Research Foundation (DFG) via the research grant TRR 361 (grant number: 492661287).
http://arxiv.org/abs/2307.00218v1
20230701043159
Optical N-plasmon: Topological hydrodynamic excitations in Graphene from repulsive Hall viscosity
[ "Wenbo Sun", "Todd Van Mechelen", "Sathwik Bharadwaj", "Ashwin K. Boddeti", "Zubin Jacob" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.str-el", "physics.flu-dyn" ]
APS/123-QED equal contribution Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA equal contribution Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA [email protected] Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA Edge states occurring in Chern and quantum spin-Hall phases are signatures of the topological electronic band structure in two-dimensional (2D) materials. Recently, a new topological electromagnetic phase of graphene characterized by the optical N-invariant has been proposed. Optical N-invariant arises from repulsive Hall viscosity in hydrodynamic many-body electron systems, fundamentally different from the Chern and Z_2 invariants. In this paper, we introduce the topologically protected edge excitation – optical N-plasmon of interacting many-body electron systems in the topological optical N-phase. These optical N-plasmons are signatures of the topological plasmonic band structure in 2D materials. We demonstrate that optical N-plasmons exhibit fundamentally different dispersion relations, stability, and edge profiles from the topologically trivial edge magneto plasmons. Based on the optical N-plasmon, we design an ultra sub-wavelength broadband topological hydrodynamic circulator, which is a chiral quantum radio-frequency circuit component crucial for information routing and interfacing quantum-classical computing systems. Furthermore, we reveal that optical N-plasmons can be effectively tuned by the neighboring dielectric environment without breaking the topological properties. Our work provides a smoking gun signature of repulsive Hall viscosity and opens practical applications of topological electromagnetic phases of two-dimensional materials. Optical N-plasmon: Topological hydrodynamic excitations in Graphene from repulsive Hall viscosity Zubin Jacob ================================================================================================= § INTRODUCTION Over the past few decades, the discoveries of topological phases and protected edge excitations of two-dimensional materials have gained a prominent role in condensed matter physics and photonics <cit.>. In graphene, the Chern invariant (C ∈ Z) originating from complex electron next-nearest-neighbor (NNN) hopping was first proposed to achieve a topological electronic phase without external magnetic fields <cit.>. The study of the corresponding chiral edge charge transport inspired discoveries beyond condensed matter physics  <cit.>, in photonics <cit.>, cold atoms <cit.>, and acoustics <cit.>. On the other hand, the Z_2 invariant (ν∈ Z_2) emerges in graphene in the presence of spin-orbit coupling and characterizes the quantum spin Hall phase <cit.>. Insights into the associated chiral edge spin transport have driven potential applications in spintronics <cit.> and topological light sources <cit.>. Recently, a new topological electromagnetic phase of graphene characterized by the optical N-invariant (N ∈ Z) was proposed <cit.>. 
This new topological phase arises only in the hydrodynamic regime of the interacting many-body electron system. The optical N-invariant characterizes the topology of bulk plasmonic as opposed to the electronic band structure and arises from the Hall viscosity of electron fluids. It is fundamentally different from the Chern and Z_2 invariant characterizing the topology of bulk electronic bands in graphene <cit.>. Inspired by this development, in this article, we introduce the topologically protected edge state – optical N-plasmon of this topological optical N-insulator and explore potential applications as well as control techniques. Recent interest has focused on the hydrodynamic regime of graphene in the electronic context, such as the violation of the Wiedemann-Franz law <cit.> and negative local resistance <cit.>. However, we note that the unique topological plasmonic behavior in the hydrodynamic regime is relatively unexplored. Our article here combines electrodynamics and hydrodynamics of graphene to uncover the topological properties. A related quantity, Hall viscosity in the static regime was measured for the first time recently <cit.> even though the theoretical prediction was made two decades ago <cit.>. It was shown that ν_H is connected to non-local Hall conductivity, non-local gyrotropy, and topological acoustic waves <cit.>. In this paper, we introduce the optical N-plasmon, a unique topological edge excitation that only occcurs in the many-body interacting hydrodynamic regime of graphene. We demonstrate that optical N-plasmons are fundamentally different from topologically-trivial edge magneto plasmons (EMPs), including the conventional EMP <cit.> in the characteristic dispersion relations, stability with respect to edge disorders, and edge profiles. We show that the dispersion of optical N-plasmons exhibits nontrivial topological nature and closes the bulk plasmonic bandgap (Fig. <ref>(a)). In stark contrast, dispersions of topologically-trivial EMPs fail to do so in general (Fig. <ref>(b)). We further reveal that, since the optical N-plasmon is topologically protected, it is not sensitive to either sharp boundary defects or edge disorders that can change the nature of electron-boundary scattering properties from diffusive to specular (see Fig. <ref>(c)). In contrast, EMPs not protected by topology can suffer from back-scattering and are generally unstable when certain edge disorders are present (see Fig. <ref>(d)). Finally, we also discuss that optical N-plasmons provide the experimental smoking gun signatures of the optical N-invariant in the 2D electron fluid. Our study provides a rigorous comparison of different regimes for the emergence of optical N-plasmons and other plasmonic excitations in graphene. Graphene provides an important platform for studying plasmonic excitations in the 2D interacting many-body electron system in different regimes <cit.>. In the non-interacting 2D electron gas (2DEG) regime, conventional gapless graphene plasmons and gapped graphene magneto plasmons were studied by identifying the zeros of dielectric functions <cit.>. We notice that non-local effects on conventional graphene (magneto) plasmons are considered within the random phase approximation <cit.>. Conventional EMPs also emerge in the non/weakly interacting regime where the 2D electron system can be described by the Euler equation without any viscous term <cit.>. 
Meanwhile, optical N-plasmons proposed in this article emerge only in the strongly-interacting hydrodynamic flow regime with repulsive Hall viscosity. We obtain dispersion relations of optical N-plasmons by finding the propagating solutions of the underlying hydrodynamic equations. Therefore, non-local effects originating from the viscous hydrodynamic model are naturally included. We develop an electromagnetic-hydrodynamic simulation based on the multiphysics model combining the linearized Navier-Stokes equations and electromagnetic equations. We employ experimentally-relevant parameters for simulating the optical N-plasmons in the 2D graphene interacting many-body electron system. Based on the optical N-plasmons, we propose the design of an ultra sub-wavelength broadband topological hydrodynamic circulator. Circulators are non-reciprocal circuit components important for microwave communications and quantum-classical information routing <cit.>. Many conventional ferrite or plasmonic circulator designs <cit.> are based on chiral EMPs not protected by topology <cit.>. The topological hydrodynamic circulator inherits robustness from optical N-plasmons, and the circulation behavior will not be perturbed by boundary defects or edge disorders. We simulate the performance of the proposed topological circulator with realistic graphene parameters. We show that the simulated frequency, momentum, and edge profile of the optical N-plasmon match well with the topological theory. We reveal that the optical N-plasmon can be effectively tuned by the neighboring dielectric environment without breaking its topological properties. Engineering plasmon properties is crucial for manipulating light in nano-devices <cit.>. We study the properties of optical N-plasmons in both transparent and opaque neighboring dielectric environments. We show that without introducing electrical contacts or structure deformations <cit.>, group velocities of optical N-plasmons can be tuned in a contact-free manner by controlling the fringing fields in neighboring dielectric materials. The controllability and the aforementioned compact and topological nature indicate potential applications of the optical N-plasmons in graphene plasmonics <cit.>. The paper is organized as follows. In Sec. <ref>, we discuss the hydrodynamic electron flow model and optical N-invariant. In Sec. <ref>, we study the dispersions and profiles of optical N-plasmons and other bulk and edge excitations in hydrodynamic electron fluids. We demonstrate the fundamental differences between the optical N-plasmon and other topologically trivial EMPs. In Sec. <ref>, we present the circulation of optical N-plasmons in the hydrodynamic topological circulator based on graphene electron fluids. In Sec. <ref>, we study the properties of optical N-plasmons in different neighboring dielectric environments. Section <ref> summarizes the paper and indicates further applications of optical N-plasmons for future research. § OPTICAL N-INVARIANT For completeness, we first summarize some key aspects of the topological optical N-invariant in graphene's viscous Hall fluid. Interacting many-body electron systems in various two-dimensional (2D) materials can be described by the hydrodynamic electron flow model when the momentum-conserving electron-electron scattering is dominant <cit.>. The optical N-invariant classifies the electromagnetic topology in the presence of electron-electron interactions through the bulk atomistic susceptibility tensor. 
It was shown that the optical N-invariant is the winding number of the atomistic susceptibility tensor <cit.>. This response function tensor is a many-body Green's function of the system, which has both spatial and temporal dispersion (i.e., momentum and frequency dependence). Here, due to the f-sum rule <cit.> and Hall viscosity ν_H, the susceptibility tensor is properly regularized. As a result, the originally unbounded 2+1D momentum-frequency space of this continuum model can be compactified and is topologically equivalent to S^2× S^1 <cit.>. Through the Green's function formalism <cit.>, a quantized integer topological invariant – optical N-invariant can be defined for this interacting many-body system <cit.>: N=sgn(ω_c)+sgn(ν_H), where ω_c is the cyclotron frequency and ν_H is the Hall viscosity. The topological phase is characterized by N=± 2 in the presence of a repulsive Hall viscosity ω_c ν_H >0, and the topologically trivial phase is characterized by N=0 with ω_c ν_H < 0. The optical N-invariant represents the topological property of the bulk plasmonic band structure and is fundamentally different from the Chern invariant and Z_2 invariant that are related to the bulk electronic band structure <cit.>. We emphasize that the topological hydrodynamic excitations in this paper have important implications beyond the linearized continuum model and can be generalized to include the lattice symmetry and local-field effects <cit.>. The topological protection is robust beyond the linear regime. The proof is related to the recently developed viscous Maxwell-Chern-Simmons theory, which connects the optical N-invariant with spin-1 eigenvalues at high-symmetry points <cit.> in momentum space. The U(1) gauge field of the 2D interacting fluid has a twist captured by the flip of spin-1 eigenvalues at high symmetry points. Thus any impurity or perturbation which does not cause spin-flipping for ultra-subwavelength (high momentum) plasmonic waves will not open the bandgap (between edge and bulk plasmonic states) in a topological optical-N insulator. The optical N-plasmon introduced in this paper occurs on the edge and is a smoking gun signature of repulsive Hall viscosity. For the bulk magneto-plasmons in hydrodynamic graphene, there exists a spin-1 skyrmionic behavior in momentum space <cit.>. The experimental probe of this momentum space skyrmion was predicted to be evanescent magneto-optic Kerr effect (e-MOKE) spectroscopy <cit.>. The sign change of the e-MOKE angle can shed light on this unique optical N-invariant of matter. We note that the Chern invariant and Z2 invariant of graphene do not capture these effects arising only in the many-body interacting hydrodynamic regime. Finally, we note that these unique edge states occur as super-symmetric partners between spin-1 excitations in Maxwell's equations and spin-1/2 fermions in the Dirac equation <cit.>. Gyrotropy in Maxwell's equations is analogous to an effective photon mass when compared to mass in the 2D Dirac equation <cit.>. The topological edge excitations in Maxwell's equations only occur from dispersive photon mass (non-local gyrotropy) of a specific sign: i.e., repulsive Hall viscosity. On the other hand, attractive Hall viscosity leads to a topologically trivial phase. Thus the signature of an optical N-phase in an ideal model can be considered to be massive spin-1 excitations in the bulk and massless linearly dispersing spin-1 excitations on the edge. However, no candidate material was known to exhibit these effects. 
One of our aims is to prove that graphene can exhibit these unique effects for experimental exploration. § OPTICAL N-PLASMONS §.§ Hydrodynamic electron flow model In this part, we present the hydrodynamic electron flow model considered in this work. In the hydrodynamic regime, electron transport is governed by the linearized Navier-Stokes equation with a viscous term <cit.>. The anti-symmetric part of the viscous tensor, Hall viscosity ν_H, can emerge in the 2D electron fluid when both time-reversal and parity symmetries are broken by an external magnetic field <cit.>. This non-dissipative Hall viscosity ν_H was first measured in ultra-clean graphene <cit.>. For an interacting many-body electron system, when the momentum-conserving electron-electron scattering is dominant, the linearized Navier-Stokes equations describing the hydrodynamics of electrons in 2D is <cit.>: ∂𝐉/∂ t=-v_s^2∇ρ - (γ-ν∇^2) 𝐉 - (ω_c+ν_H ∇^2) 𝐉×ẑ +e^2n_0/m𝐄, ∂_tρ+∇·𝐉=0. Here, 𝐉=(J_x,J_y) is the 2D current density, v_s^2=v_F^2/2 represents the compressional wave velocity in the 2D electron fluid, v_F is the Fermi velocity, ρ is the charge density, γ is the damping rate, ν is the ordinary shear viscosity, ω_c=eB/(mc) is the cyclotron frequency, ν_H is the Hall viscosity, n_0 is the electron density, and m is the effective electron mass. Within the quasi-static approximation, electric field 𝐄=-∇ϕ. In the absence of external free charges out of the electron fluid plane, 𝐄 arises from the fringing fields in the surrounding medium. The second term on the RHS of Eq. (<ref>) describes dissipation in the electron fluid. Meanwhile, the third term (ω_c+ν_H ∇^2) 𝐉×ẑ is dissipation-less and will only emerge in the 2D electron fluid when both time reversal symmetry 𝒯 and parity symmetry 𝒫 are broken at the same time by an external magnetic field B. Continuity Eq. (<ref>) describes the charge conservation law for electrons. §.§ Bulk magneto plasmons We first discuss the bulk magneto plasmons in the 2D electron fluid with Hall viscosity. We consider the dielectric material surrounding the 2D electron fluid is isotropic with an effective permittivity tensor ε=εI. From Eq. (<ref>,<ref>), in the low-loss limit (γ,ν→ 0), we can solve the dispersion of bulk magneto plasmons by considering propagating bulk modes of the form e^i(𝐪·𝐫 - ω t): ω^2=2π e^2 n_0 |q|/m ε+v_s^2 q^2 +(ω_c - ν_H q^2)^2. Here, the last term in Eq. (<ref>) originates from the external magnetic field and opens the bandgap between bulk bands at q=0. As q→ + ∞, the bulk magneto plasmon dispersion ω(q) is dominated by the Hall viscosity term and shows the asymptotic behavior ω=𝒪(q^2). In contrast, in the absence of ν_H, the conventional bulk magneto plasmon dispersion is dominated by the v_s^2 q^2 term with ω=𝒪(q) when q→ + ∞. We focus on the transparent surrounding material with ε>0. Hence, the bandgap between bulk bands will be opened for all momentum q. The shape of the bulk magneto plasmon dispersion largely depends on the Hall viscosity ν_H and dielectric permittivity of the surrounding medium ε. We can define a unitless value ℬ to classify two different shapes of bulk bands: ℬ=(2 ν_H ω_c - v_s^2)^3 m^2 ε^2 /27 π^2 e^4 ν_H^2 n_0^2. For ℬ⩽ 1, bulk bands will monotonically increase with momentum |q|. For ℬ>1, bulk bands will have a Mexican hat shape. In Fig. <ref>, we show these two classes of bulk bands by magenta curves (Fig. <ref>(a-c,g-i) for ℬ⩽ 1, Fig. <ref>(d-f) for ℬ > 1). 
In the figures, q̃=q v_s/ω_c and ω̃=ω/ω_c are the unitless momentum and frequency normalized by characteristic parameters of the system. It is worth noting that for ℬ⩽ 1, the bandgap of bulk bands Δ=2ω_c is only determined by the external magnetic field B. For ℬ>1, the bandgap of bulk bands Δ<2ω_c is also controlled by material properties and permittivity of the surrounding dielectric material. §.§ Optical N-plasmons In this part, we demonstrate the nontrivial topological properties of optical N-plasmons. We compare the dispersions of optical N-plasmons and other topologically trivial edge states in 2D electron fluid. Conventional edge states are usually believed to depend on boundary conditions sensitively <cit.>. In contrast, we show that the topologically protected optical N-plasmons are not sensitive to boundary conditions at the 2D electron fluid boundaries. For the 2D electron fluid, fringing fields in the surrounding media and electron fluid boundary conditions complicate the edge problems significantly <cit.>. The fringing fields mediate the interactions between quasi-static charges in the 2D plane and contribute to an effectively non-local potential. We solve the edge problem with the non-local potential fully by numerical simulations in section <ref>. In this section, we adopt the Fetter approximation, which can provide accurate dispersions of edge states except in the long-wavelength limit (q → 0) <cit.>. Boundary conditions of the 2D electron fluid are microscopically determined by degrees of edge disorders and the mechanism of electron-boundary scattering, and can be characterized by the slip length l_s <cit.>. Many different factors, including charge density n_0, temperature, and smoothness of material boundaries, can influence l_s. In the low-loss limit, electron fluid boundary conditions can be written in terms of l_s <cit.>: [t̂·ς·n̂+t̂·𝐉/l_s]=0, ς=[ -∂_x J_y - ∂_y J_x ∂_x J_x -∂_y J_y; ∂_x J_x -∂_y J_y ∂_x J_y + ∂_y J_x ], where t̂, n̂ are the unit vectors in the tangential and normal directions. The two extreme cases of electron fluid boundary conditions, no-slip and no-stress, correspond to slip length l_s = 0 and l_s = ∞, respectively. These two regimes could happen for the viscous electron fluid when the electron-boundary scattering is diffusive (no-slip) or specular (no-stress). In the intermediate regime where 0<l_s<∞, the finite-slip boundary condition is appropriate, where part of electron momentum is lost in the electron-boundary scattering process. In Fig. <ref>, we show the dispersions of edge excitations when the viscous Hall electron fluid is in different topological phases and under various boundary conditions. The derivations of the edge state dispersion and the material parameters are given in Appendix <ref>. The bulk-boundary correspondence guarantees that for the topological phase characterized by N=2, optical N-plasmons always exist in the bandgap of bulk bands. Dispersion of optical N-plasmons is marked by cyan curves in Fig. <ref>(a - c) for bulk bands with ℬ⩽ 1 and in Fig. <ref>(d - f) for bulk bands with ℬ > 1. Optical N-plasmons can connect the bulk bands in both cases. Conventional edge states are usually considered to be sensitive to boundary conditions <cit.>. In contrast, as is shown in Fig. <ref>(a - f), the dispersion of the optical N-plasmon is independent of the fluid boundary conditions since it is protected by topology. 
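For reference, the bulk dispersion and the shape parameter ℬ defined above can be evaluated with a few lines of Python; the parameter values are left entirely to the user (any consistent unit system), i.e., no specific graphene numbers are assumed in this sketch:

import numpy as np

def bulk_dispersion(q, e, n0, m, eps, vs, wc, nuH):
    # omega(q) from  omega^2 = 2*pi*e^2*n0*|q|/(m*eps) + vs^2*q^2 + (wc - nuH*q^2)^2
    return np.sqrt(2 * np.pi * e ** 2 * n0 * np.abs(q) / (m * eps)
                   + vs ** 2 * q ** 2 + (wc - nuH * q ** 2) ** 2)

def band_shape_B(e, n0, m, eps, vs, wc, nuH):
    # Unitless B: bulk bands increase monotonically for B <= 1 and develop a
    # Mexican-hat shape for B > 1 (requires nuH != 0).
    return ((2 * nuH * wc - vs ** 2) ** 3 * m ** 2 * eps ** 2
            / (27 * np.pi ** 2 * e ** 4 * nuH ** 2 * n0 ** 2))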
This reveals the advantages of optical N-plasmons for practical applications in information technology, where stability is highly required. As a result, we simulate the performance of a circulator in section <ref> based on the optical N-plasmons. Apart from the optical N-plasmon, some other types of edge magneto plasmons (EMPs) can also exist in the bandgap under extreme boundary conditions (l_s=0 or ∞). These EMPs may also be chiral (CEMPs) and are marked by yellow solid and dashed curves in Fig. <ref>. It is worth noting that although optical N-plasmons are always chiral, CEMPs are not necessarily protected by topology. CEMPs exist in the bandgap due to the anomalous bulk-boundary correspondence under specific boundary conditions. It is related to the scattering of bulk modes at the boundary and “ghost edge modes” at infinite frequency <cit.>. Dispersions of CEMPs are very sensitive to boundary conditions. As is shown in Fig. <ref>(a, c, d, f), by continuously deforming the shape of bulk bands without closing the bandgap at any q̃ point, the group velocity of CEMP can be reversed, which is in contrast to the optical N-plasmon. The frequency windows where only optical N-plasmons can be excited are marked by the yellow-cyan regions. In Fig. <ref>(g - i), we present the dispersions of edge states for the topologically trivial N=0 phase. Here, optical N-plasmons do not exist. Despite some unidirectional frequency windows existing under extreme cases of boundary conditions (marked by yellow region), the dispersions of these CEMPs are not stable under varying boundary conditions. Furthermore, the bandgap of bulk bands can not be connected under all boundary conditions. This is because the bulk material is in a topologically trivial phase, and no edge state is protected by topology. Figure. <ref>(j) shows the dispersion of the conventional Fetter edge magneto plasmons <cit.> (FEMP) for ν_H=0. In this case, since the unbounded momentum space can not be compactified due to the absence of Hall viscosity ν_H, no topological interpretation exists for the bulk. As a result, FEMP is unidirectional but not topological and can not connect bulk bands. Hence, FEMP is not guaranteed to be immune to back-scattering at the boundary defects. As is shown in Fig. <ref>(k - n), optical N-plasmons (solid cyan curves) have distinct normal profiles compared with CEMP and FEMP (solid and dashed yellow curves). Here, δρ represents the charge density variation of different types of normalized edge states ψ̃. x̃=x ω_c/v_s is the normalized unitless distance from the fluid boundary. For optical N-plasmons, despite the confinement being related to the Hall diffusion length D_H=√(ν_H/ω_c) and may vary with Hall viscosity, δρ(x̃=0)=0 is always valid. For CEMP and FEMP, δρ(x̃=0)≠ 0. This difference is because the dispersion of optical N-plasmons is independent of while dispersions of CEMP and FEMP are sensitive to boundary conditions. With some algebra, we can prove that in the low-loss limit, δρ(x̃=0)=0 is a necessary and sufficient condition for ψ̃ to be independent of varying boundary conditions (see Appendix A). In the next section, we employ optical N-plasmons to design an ultra sub-wavelength broadband topological hydrodynamic circulator. § TOPOLOGICAL HYDRODYNAMIC CIRCULATOR §.§ Fringing fields and non-local in-plane potential For the 2D electron fluid confined in the z=0 plane, the fringing fields out of the plane introduce a non-local effect in the coupling between the charge density ρ and in-plane potential ϕ. 
The non-local coupling between ϕ and ρ confined in the 2D domain Ω and free charges ρ_f out of the plane is: ϕ(t,𝐫)=4π/ε∫_Ω d𝐫'G(𝐫,𝐫')ρ(t,𝐫')+ϕ_f(t,𝐫), where ϕ_f(t,𝐫)=4π∫ d𝐑_0 G(𝐫,𝐑_0)ρ_f(t,𝐑_0)/ε is the electric potential generated from the free free charges ρ_f, ε is the effective permittivity of the surrounding medium, 𝐫, 𝐫' denote the 2D coordinates, 𝐑_0 denotes the 3D coordinates, G is the scalar Green's function, G(𝐫,𝐫')=1/4π |𝐫-𝐫'|. Here, in contrast to the 3D case, for the 2D electron fluid, no simple differential operator with respect to the 2D coordinates 𝐫 can relate ϕ and ρ locally <cit.>. In this section, we develop an electromagnetic-hydrodynamic simulation to solve the coupled Eqs. (<ref>) and (<ref>). §.§ Graphene-based topological hydrodynamic circulator Optical N-plasmons at the edge of the viscous Hall electron fluid in the N=2 phase are fundamentally protected by the topology and are robust against fluctuations. As a result, it is well suited for applications in information processing. In this section, we propose the design of an ultra sub-wavelength broadband topological hydrodynamic circulator based on the optical N-plasmons in graphene. The schematic of the 3-port circulator design is demonstrated in Fig. <ref>(a). Graphene with the Y-shape circulator geometry (gray region) is on top of the isotropic dielectric material with permittivity ε_b (blue bulk). In this case, effective permittivity ε=ε_b/2. Graphene is required to be ultra-clean so that the interacting electrons can be described by the hydrodynamic flow model. A static external magnetic field is applied in the graphene region, and Hall viscosity can emerge in the system since the time-reversal symmetry and parity symmetry are broken. For repulsive Hall viscosity (ν_H ω_c >0), viscous Hall electron fluid in graphene will be in the topological N=2 phase. Three oscillating electric dipoles with oscillation frequency ω_s are placed on top of each port. These dipoles are used to excite optical N-plasmons in the circulator. Hence, ω_s is considered to be in the bandgap of bulk bands. The possible boundary defects of the circulator are captured by a sharp corner in port 2. Since optical N-plasmons are unidirectional and immune to back-scattering, the topological circulation behavior from port 1 → 2 → 3 will not be interfered with by the boundary defect. Reversing the direction of the magnetic field realizes the topological phase transition into N=-2, and the circulator will have an opposite circulation direction port 3 → 2 → 1 accordingly. We employ the finite element method to simulate the topological hydrodynamic circulator in the time domain and demonstrate the topological circulation behavior of optical N-plasmons in Fig. <ref>(b). We also provide a supplementary video generated from the electromagnetic-hydrodynamic simulations. The graphene region is described by Eqs. (<ref>,<ref>) and a finite slip boundary condition 0<l_s<∞ is applied at the boundary of graphene. In the simulations, we employ experimental graphene parameters (Appendix. <ref>) in the low-loss limit with ℬ<1 and consider a high index substrate with ε=50 under graphene. An external magnetic field B=2 T is applied in the graphene region with a port width of 329 nm. The three dipoles on top of each port with oscillation frequencies ω_s=ω_c/2 contribute to ϕ_f in Eq. (<ref>). Their projections in the graphene plane are marked by red stars. Inside the graphene region, normalized charge density variations δρ are represented by the colorbar. 
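Before turning to the simulation results, we note how the non-local coupling of Eq. (<ref>) can be evaluated on a grid. The fragment below is only a brute-force sketch of the Coulomb kernel ϕ(𝐫)=(1/ε)∫_Ω ρ(𝐫')/|𝐫-𝐫'| d𝐫' applied to an illustrative charge patch; it is not the finite-element electromagnetic-hydrodynamic solver used for Fig. <ref>, and the self-cell weight is the standard estimate for a uniformly charged square cell.

import numpy as np

def inplane_potential(rho, h, eps):
    # phi_i = (1/eps) * sum_j rho_j * h^2 / |r_i - r_j| on a square grid (brute force, O(N^2))
    ny, nx = rho.shape
    y, x = np.meshgrid(np.arange(ny) * h, np.arange(nx) * h, indexing="ij")
    pts = np.column_stack([x.ravel(), y.ravel()])
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                      # treat the self cell separately
    kernel = h**2 / dist
    np.fill_diagonal(kernel, 4.0 * h * np.log(1.0 + np.sqrt(2.0)))  # uniform square self term
    return (kernel @ rho.ravel()).reshape(rho.shape) / eps

# toy dipole-like charge patch on a 40 x 40 grid with effective permittivity eps = 50
n, h, eps = 40, 1.0, 50.0
rho = np.zeros((n, n))
rho[18:22, 10:14], rho[18:22, 26:30] = 1.0, -1.0
phi = inplane_potential(rho, h, eps)
print("min/max of the induced in-plane potential:", phi.min(), phi.max())

In the actual simulations this potential feeds back into Eq. (<ref>) through 𝐄=-∇ϕ, which is precisely what makes the edge problem non-local.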
From the simulations, it is clear that the excited optical N-plasmons at red stars will flow unidirectionally from port 1 → 2 → 3. Optical N-plasmons cross the sharp corner in port 2 smoothly without back-scattering. In Fig. <ref>(c), we show the dispersion relation (cyan curve) of optical N-plasmons with the graphene parameters considered in our simulations. We mark the optical N-plasmons excited by the oscillating dipoles in our simulations with the red star in the frequency and momentum space. In Fig. <ref>(d), we show the normal profile of the normalized edge excitation at t̃≈23 and compare the charge density variations along cut line segment ℓ in port 1 (ℓ is marked by the magenta line in Fig. <ref>(b)). x̃ represents the distance from the fluid boundary. t̃=t ω_c is the unitless time normalized by the characteristic timescale of the system. The simulation results in each mesh along ℓ match the theory predictions (cyan curve) well. The small deviations at larger x̃ are related to coarser meshes in that region. δρ(x̃=0)=0 shows that the excited edge states in Fig. <ref>(b) are optical N-plasmons protected by topology instead of other types of chiral edge states. In Fig. <ref>(e), we study δρ(t) at points A and B and show the corresponding charge density variations (points A and B are marked by the magenta dots in Fig. <ref>(b)). The theory curves are plotted by fitting the simulation results at point A using a sine curve with period 2π/ω and translationally shifting it by d_AC/v_p to get theory results at point B. We can see that the frequency ω and propagation velocity v_p of the simulated optical N-plasmons match with their theoretical counterparts. The performance of a circulator can be evaluated based on many aspects, including the form factor, isolation, and bandwidth. Form factor ℱ indicates the relative size of a circulator with respect to its working frequency range. In our design, ℱ is determined by the confinement of optical N-plasmons D_H=√(ν_H/ω_c) and can be defined as the ratio between port width d and vacuum wavelength corresponding to the frequency ω_s. For the simulated circulator performance in Fig. <ref>(b), ℱ≈ 2.5× 10^-3, revealing that this topological hydrodynamic circulator design is ultra-compact. Since no back-scattering is allowed by the topology and unidirectional optical N-plasmons are the only allowed state in the bandgap, this topological circulator should possess much larger isolation compared with other designs based on topologically-trivial edge states. The bandwidth of this topological circulator is determined by the bandgap of bulk bands. With an external magnetic field B=2 T, bandwidth BW=Δ=2 ω_c ≈ 4.5 THz is ultrawide. It is worth noting that the performance of this design, including form factor, bandwidth, and response speed, can be effectively tuned and optimized by changing the external magnetic field or surrounding media (see section <ref>). All the discussions above show that the proposed design is ultra sub-wavelength, broadband, tunable, and can operate in the THz range. This reveals that the topological hydrodynamic circulator can play an important role in next-generation information routing and interfacing quantum-classical computing systems. § CONTACT-FREE OPTICAL N-PLASMON CONTROL WITH NEIGHBORING DIELECTRIC ENVIRONMENT Although fringing fields in the surrounding medium can cause intrinsic non-locality in Eq. (<ref>) and complicate the problem greatly, they can offer new flexibility to tune optical N-plasmons. 
The existence of optical N-plasmons is guaranteed by topology regardless of the neighboring dielectric materials, but it is possible to exploit the surrounding medium to tune and optimize the optical N-plasmon properties in nano-devices without introducing electrical contacts in the viscous electron fluid. In this section, we study the influence of the surrounding dielectric environment on optical N-plasmons and the topological hydrodynamic circulator. We consider that the 2D graphene viscous electron fluid in the N=2 phase is on top of the isotropic transparent material with positive dielectric constant ε. The bandgap of bulk bands is always connected by optical N-plasmons at the edge. In Fig. <ref>(a), we show that the confinement of optical N-plasmons is not sensitive to ε. We demonstrate that the charge density variations corresponding to the normal profiles of the normalized optical N-plasmon states δρ(x̃) at ε=10, 30, 100 are similar. This is because the confinement of δρ(x̃) is determined by the Hall diffusion length D_H=√(ν_H/ω_c) independent of ε. In Fig. <ref>(b), we present that the group velocity v_g=v_s d ω̃/d q̃ of optical N-plasmons can be effectively tuned by ε. By changing ε from 5 to 100, group velocity v_g of the optical N-plasmon with ω̃=0.5 can be modulated by a factor of 10. It is worth noting that v_g > v_s=v_F/√(2) and can approach v_s asymptotically in the large ε limit. For the topological circulator design, the stable confinement of optical N-plasmons reveals that the circulator can always be ultra-compact, and the controllable v_g indicates a tunable response speed of the topological circulator. § CONCLUSION To summarize, we introduce the optical N-plasmon, which is the topologically protected edge excitation of the two-dimensional hydrodynamic electron flow with repulsive Hall viscosity. Optical N-plasmons are fundamentally different from conventional chiral/Fetter EMPs in three aspects: dispersion relations, stability with respect to edge disorders, and edge profiles. We propose an ultra sub-wavelength broadband topological hydrodynamic circulator based on optical N-plasmons, which is a chiral quantum radio-frequency circuit component crucial for information routing and interfacing quantum-classical computing systems. The topological circulator has a robust performance when boundary defects and edge disorders are present. The simulated optical N-plasmons circulating in the circulator ports show a good match with the theory. We demonstrate that group velocities of optical N-plasmons can be tuned in a contact-free manner by controlling the fringing fields in neighboring dielectric materials. Our work provides an experimental signature of repulsive Hall viscosity and opens practical applications of the new topological electromagnetic phase of two-dimensional materials. Moreover, the compact, tunable, topologically protected optical N-plasmons can have further applications in various fields, including graphene plasmonics <cit.>, plasmonic metamaterials <cit.>, nonreciprocal quantum devices <cit.>. § ACKNOWLEDGEMENTS This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Nascent Light-Matter Interactions (NLM) program and U.S. Department of Energy (DOE), Office of Basic Sciences under DE-SC0017717. § EDGE EXCITATIONS OF THE HYDRODYNAMIC ELECTRON FLUID In this appendix, we provide solutions to the edge excitations of the hydrodynamic electron flow model based on the Fetter approximation <cit.>. 
Assuming the hydrodynamic electron fluid has the half plane geometry in the x>0 region. The propagating edge modes have the form f(x,y,t)=f(x)e^i(qy-ω t), where f can be the charge density variation δρ or the 2D current density 𝐉. As we have shown in Eq. (<ref>), the fringing fields out of the electron fluid plane introduce intrinsic non-locality in the electromagnetic potential ϕ. Instead of solving this complex non-local problem for edge state dispersion, we consider an approximate integral kernel that makes the Poission's equation effectively local <cit.>. The non-local integral form of Poission's equation can thus be replaced by the differential equation: ∂^2 ϕ(x)/∂ x^2 - 2q^2 ϕ(x) = 4π |q| ρ(x)/ε. This Fetter approximation provides accurate dispersion relations for our study except in the long-wavelength limit (q → 0), where the asymptotic behavior of the exact solution can not be recovered <cit.>. Combining Eq. (<ref>) and Eq. (<ref>), the dispersions ω=ω(q) of edge modes of the form e^i(qy-ω t) are given by the following coupled equations <cit.>: -α^4 q̃'̃^6+(2α^2-1-α^4 q̃^2)q̃'̃^4 + (ω̃^2-ω̃_̃b̃^2-Ω̃_̃p̃^2+α^4 q̃^4)q̃'̃^2 + q̃^2 (ω̃^2-1) = 0, [F_ij]_3 × 3 = 0, where [F_ij] is a 3×3 matrix corresponding to boundary conditions: F_1j=√(2)|q̃| + η_j, F_2j=[ω̃η_j - (1-α^2q̃'̃_̃j̃^2)q̃] (q̃'̃_̃j̃^2+q̃^2) / q̃'̃_̃j̃^2, F_3j^0=[ω̃q̃ - (1-α^2q̃'̃_j^2)η_j] (q̃'̃_̃j̃^2+q̃^2) / q̃'̃_̃j̃^2, F_3j^+∞=( 2q̃[ω̃q̃ - (1-α^2q̃'̃_j^2)η_j] - ω̃q̃'̃_j^2 ) (q̃'̃_̃j̃^2+q̃^2) / q̃'̃_̃j̃^2, where α=√(ω_cν_H)/v_s is a unitless constant determined by the electron fluid. ω̃=ω/ω_c and q̃=v_s q/ω_c are the normalized frequency and momentum. Equation (<ref>) is a cubic equation with respect to q̃'̃^2, and q̃'̃_̃ĩ^2 is the ith root of Eq. (<ref>). η_i=√(q̃^2-q̃'̃_̃ĩ^2) with Re η_i ⩾ 0. Here, plasma frequency Ω̃_̃p̃=β√(|q̃|), bulk plasmonic band dispersion ω̃_̃b̃^2=(1-α^2 q̃^2)^2+q̃^2+β^2|q̃| and β=√(2π e^2 n_0/(m εω_c v_s)). When the electron fluid boundary condition is no-slip with l_s=0 or no-stress with l_s=+∞, F_3j=F_3j^0 or F_3j=F_3j^+∞ respectively. For finite slip boundary condition, F_3j=F_3j^+∞ + κ F_3j^0, where κ is a constant determined by l_s. In Fig. <ref>(a-c), we consider α=0.6817, β = 1.5263 corresponding to the monolayer graphene experimental parameters in Table <ref>. In Fig. <ref>(d-f), we consider α=3.9358, β = 1.5263 for the ℬ>1 case. In Fig. <ref>(g-i), we consider α=0.6817i, β = 1.5263 for the topologically trivial N=0 phase. In Fig. <ref>(j), we consider ν_H=0 and other paramters are the same as the monolayer graphene parameters. In Fig. <ref>, we consider the monolayer graphene parameters, and β is mainly determined by the neighbouring medium permittivity ε. The solutions to δρ (x) of edge modes are: δρ (x) ∝Σ_i (q̃'̃_̃ĩ^2+q̃^2) ϕ_i e^-η_i x̃, where ϕ_i satisfies Σ_j F_ijϕ_j =0. From Eq. (<ref>), δρ (x̃=0) ∝Σ_i (q̃'̃_̃ĩ^2+q̃^2) ϕ_i. In the low-loss limit, we can find that if there exists an edge mode satisfying all three kinds of electron fluid boundary conditions, δρ (x̃=0) = 0 since Σ_i (q̃'̃_̃ĩ^2+q̃^2) ϕ_i is a linear combination of Σ_i F_3i^0 ϕ_i and Σ_i F_3i^+∞ϕ_i. Correspondingly, if a propagating edge mode satisfies δρ (x̃=0) = 0, then this mode can exist under all three kinds of electron fluid boundary conditions. This can also be understood from the continuity equation only. Combining the Fourier transform of Eq. 
(<ref>) with respect to x multiplied by k (momentum corresponding to x), δρ (x̃=0) ∝lim_|k|→∞ k f(k) <cit.>, and the electron fluid boundary conditions, we can reach the same argument in the previous paragraph. § SIMULATION DETAILS In this appendix, we present the graphene parameters employed in the topological hydrodynamic circulator simulations. We also provide a supplementary video for the time-domain simulations of optical N-plasmons in the supplementary materials. Fig. <ref>(b) corresponds to the simulation results at t̃≈ 34. Here, we also demonstrate that the topological circulator will also have a robust performance in the presence of large dissipation. We consider the normal viscosity ν=9.9× 10^-4 m^2/s and damping rate γ̃=γ / ω_c = 0.0176. In this case, we simulate the performance of the topological hydrodynamic circulator and show the results in Fig. <ref>. Boundaries of the circulator are taken to have finite slip length. Here, we can find that although the optical N-plasmons experience dissipation in the propagation process, it is still unidirectional and immune to back-scattering at the boundary defect in port 2. A video for the simulations of optical N-plasmons with large dissipation is also provided in the supplementary materials. § OPTICAL N-PLASMONS IN THE OPAQUE SURROUNDING MEDIUM In this appendix, we consider the dielectric materials surrounding the electron fluid in N=2 phase to be opaque with negative dielectric constant ε. Distinct from the transparent medium case where the bulk bandgap is always open for all q̃, the bulk bandgap can be closed at some q̃ points for negative ε with small |ε| values and reopened at all q̃ when |ε| is large. We distinguish these two regimes by ε<0 and ε≪ 0. In Fig. <ref>(a,b), we show the dispersions of bulk magneto plasmons (magenta curve) and optical N-plasmons (cyan curve) and the profile of δρ(x̃) in the ε<0 case. It is worth noting that optical N-plasmons will persist in the bandgap close to q̃=0 points with a reversed direction. It is hard to excite the optical N-plasmons alone in this case since its frequency range is embedded in bulk bands. The reversal of the optical N-plasmon propagation direction is fundamentally different from the reversal of CEMP directions discussed in section <ref>, where bandgap closure never happens at any q̃ point. For the ε≪ 0 case, as is shown in Fig. <ref>(c,d), the bulk bandgap is reopened, and optical N-plasmons will have the same direction as in the ε > 0 case.
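For readers who wish to reproduce the appendix dispersions numerically, the sketch below assembles the cubic of Eq. (<ref>) and the boundary matrix of Eq. (<ref>) and scans for the in-gap frequency at fixed q̃. It is a deliberately crude |det F| scan under the Fetter approximation with the graphene-like α, β quoted above, not the root tracking used to draw Fig. <ref>, so the printed frequencies should only be read as rough locations of the optical N-plasmon branch.

import numpy as np

def edge_det(w, q, alpha, beta, bc="no-stress", kappa=0.0):
    # |det F| for the Fetter-approximation edge problem; bc selects the third row F_3j
    wb2 = (1 - alpha**2 * q**2)**2 + q**2 + beta**2 * abs(q)     # bulk band omega_b~^2
    Om2 = beta**2 * abs(q)                                       # plasma term Omega_p~^2
    coeffs = [-alpha**4,
              2 * alpha**2 - 1 - alpha**4 * q**2,
              w**2 - wb2 - Om2 + alpha**4 * q**4,
              q**2 * (w**2 - 1)]
    s = np.roots(coeffs)                        # the three roots q'_j^2 of the cubic
    eta = np.sqrt(q**2 - s + 0j)
    eta = np.where(eta.real >= 0, eta, -eta)    # convention Re(eta_j) >= 0
    F = np.zeros((3, 3), dtype=complex)
    F[0] = np.sqrt(2.0) * abs(q) + eta
    F[1] = (w * eta - (1 - alpha**2 * s) * q) * (s + q**2) / s
    F3_0 = (w * q - (1 - alpha**2 * s) * eta) * (s + q**2) / s
    F3_inf = (2 * q * (w * q - (1 - alpha**2 * s) * eta) - w * s) * (s + q**2) / s
    F[2] = {"no-slip": F3_0, "no-stress": F3_inf, "finite": F3_inf + kappa * F3_0}[bc]
    return abs(np.linalg.det(F))

alpha, beta = 0.6817, 1.5263                    # graphene-like parameters of Appendix A
for q in (0.5, 1.0, 1.5):
    ws = np.linspace(0.05, 0.95, 400)           # frequencies inside the bulk gap (B <= 1 case)
    dets = [edge_det(w, q, alpha, beta) for w in ws]
    print("q~ = %.1f :  omega~ at minimal |det F| = %.3f" % (q, ws[int(np.argmin(dets))]))

Switching bc between "no-slip", "no-stress" and "finite" provides a quick way to probe the boundary-condition independence of the optical N-plasmon branch discussed in section <ref>.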
http://arxiv.org/abs/2307.00620v1
20230702172140
Polarization of recoil photon in non-linear Compton process
[ "A. I. Titov" ]
hep-ph
[ "hep-ph" ]
Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna 141980, Russia Email:[email protected] The polarization of recoil photon (γ') in the non-linear Compton process e + L⃗→γ⃗' +e' in interaction of a relativistic electron with a linearly polarized laser beam ( L⃗) is studied. In particular, we consider the asymmetry A of differential cross sections for two independent axes describing the Compton process, which determine the polarization properties of the recoil photon. The sign and absolute value of the asymmetry determine direction and degree of γ' polarization. We have analyzed the process in a wide range of laser intensity, which covers existing and future experiments. Our results provide additional knowledge for studying non-linear multi-photon effects in quantum electrodynamics and can be used in planning experiments in envisaged laser facilities. 12.20.Ds, 13.40.-f, 23.20.Nx Polarization of recoil photon in non-linear Compton process. A. I. Titov August 1, 2023 ================================================================= § INTRODUCTION Upcoming and planned experiments combining increasingly intense lasers and relativistic particle beams will access new regimes of non-linear relativistic quantum electrodynamics, which attract attention of many researchers from theory and experiment. An excellent analysis of theoretical achievements and possible expectations from new facilities is given in recent review  <cit.>, see also <cit.>. Important parts of these studies are the non-linear Compton (nlC) and non-linear Breit-Wheeler (nlBW) processes, where a probe particle, electron or photon, respectively, interacts with a high-intensity background electromagnetic field. Both processes have been thoroughly researched theoretically in the past and then re-considered and improved  <cit.>. In most cases, the field of a high-intensity optical laser is considered as the background field. For example, the famous SLAC experiment E-144 <cit.>, envisaged European LUXE <cit.>, E-320 at FACET II/SLAC FACET-II <cit.> projects. For X-ray photon beam experiments (XFEL), see <cit.> In nlBW, the formation of an electron-positron pair using an optical laser pulse of several eV requires an initial photon with a frequency of tens of GeV. It can be a recoil photon (γ') produced in the interaction of a relativistic electron with an optical laser beam in a nonlinear multi-photon Compton process. Considering such a process with an emphasis on γ' polarization is the main topic of the present work. We use the following conventions: p(E_e,p⃗) and p'(E'_e,p⃗') are four-momenta of incoming and outgoing electrons, respectively. The four-momentum of the outgoing photon with frequency ω' is k'(ω',𝐤'), 𝐤'=ω'(𝐱sinθcosφ +𝐲sinθsinφ+𝐳cosθ), where θ and φ are the corresponding polar, and azimuthal angles, respectively. The electromagnetic field intensity is described by the dimensionless parameter ξ=|e| E/(mω), where E is the electric field strength, ω≡ω_γ is the the central laser frequency, where -|e| and m stand for electron charge and mass, respectively, the corresponding four momentum of beam photon k(ω,00𝐳ω). We also use a quantum non-linearity parameter χ=ξ(k· p)/m^2. The beam polarization is assumed to be along the 𝐱 axis. We use natural units with c=h=1, and e^2/4π=α≈ 1/137.036. In case of linearly polarized initial photons, the yield of the pair in nlBW depends on kinematics (total energy in c.m s.), ξ, and the mutual polarization of the initial photons. 
For example, at ultra-high field intensity with ξ≫1, the asymmetry of yields when the polarizations of initial photons are mutually perpendicular W_⊥ or parallel W_∥, A=(W_⊥-W_∥)/(W_⊥+W_∥) varies from 1/3 to 1/5 depending on kinematic conditions <cit.>. At low field strength with ξ≲ 1, the asymmetry A exhibits some non-monotonic behavior, varying from zero to one depending on kinematics <cit.>. Therefore, the mutual polarization of two linearly polarized initial photons is an important component of the nlBW process. In addition to the direct connection to the Breit-Wheeler process, study of the polarization of the Compton recoil photons is of independent importance as a source of additional information on the dynamics of non-linear multi-photon processes. Methodologically, the study of the polarization of a recoil photon is close to <cit.>, where the crossed process nlBW with linearly polarized photons in the initial state in a wide range of e.m. strengths is considered. Together with the plane wave analysis we will study effect of the finite pulse, where the linearly polarized background field is determined by the e.m. four-potential in the axial gauge A=(0,𝐀) as 𝐄=-∂𝐀/∂ t: 𝐀(ϕ) = f(ϕ) [ 𝐚cos(ϕ)] . The quantity ϕ=k· x is the invariant phase with four-wave vector k=(ω, 𝐤), obeying the null field property k^2 = k · k=0 implying ω = |𝐤|, 𝐚≡𝐚_x; |𝐚|^2=a^2; transversality means 𝐤𝐚=0 in the present gauge. For the sake of definiteness, the envelope function f(ϕ) is chosen as hyperbolic secant: f(ϕ)=1/[coshϕ/Δ] . The dimensionless quantity Δ is related to the pulse duration 2Δ=2π N, where N has the meaning of the number of cycles in the laser pulse. It is related to the time duration of the pulse τ_N=2N/ω (for the dependence of some observables on the envelope shape, see, for example <cit.>). The cross section of Compton process includes a normalization factor N_0, which is related to the average square of the e.m. strength and is expressed through the envelope functions as N_0 = 1/2 π∫_-∞^∞ dϕ(f^2(ϕ)+f'^2(ϕ)) cos^2ϕ with the asymptotic value N_0 ≈Δ / 2 π at Δ / π≫ 1. We consider the nlC process for linearly polarized recoil photons with polarization axes 𝐞'_i, i=1,2, The vectors 𝐞'_1,2 are mutually orthogonal and orthogonal to 𝐤': 𝐞'_1,2,⊥𝐤' The amplitude of the process M depends on e'_i: M_i=M(e'_i). The sum |M_1|^2+|M_2|^2 determines the total probability of the process, while the difference |M_1|^2-|M_2|^2 leads to the asymmetry A_12≡|M_1|^2 - |M_2|^2/|M_1|^2 + |M_2|^2 . The sign A_12 indicates the direction of γ'-polarization relative to the 𝐞'_1,2 axes, and the degree of polarization P_1,2 associated with asymmetry as P_1,2=1± A_12/2 . The natural choice of the axes 𝐞'_i read <cit.> 𝐞'_1=[𝐤𝐤']/|[𝐤𝐤']|, 𝐞'_2=|𝐤' 𝐞'_1|/|𝐤'| , leading to 𝐞'_1 = -𝐱sinφ + 𝐲cosφ 𝐞'_2 = -𝐱cosθcosφ -𝐲cosθsinφ +𝐳sinθ . The physical meaning of 𝐞'_i becomes clear at backward scattering with cosθ=-1. In a coplanar frame with φ=0, 𝐞'_1 is parallel to the 𝐲 axis, i.e. perpendicular to the polarization of the beam, while 𝐞'_2 is parallel to the 𝐱 axis or the beam polarization. Our work is organized as follows. In Section II, we present the basic formulas for cross sections and asymmetries. The cases ξ≤ 1 and ξ≫1 are discussed in Sections III and IV. Our summary is given in Section V. § THE CROSS SECTIONS AND ASYMMETRIES As mentioned above, we consider essentially multi-photon events, when a large number ⩾1 of laser photons participate simultaneously in the Compton process. 
Here we present the main formulas for the cross sections and asymmetries that we use in our consideration. We analyze them as functions of the frequency ω' in a wide range of ξ. The energy of the initial electron is chosen in accordance with the LUXE kinematics <cit.> E_e=17.5 GeV. The exception is the case of very small ξ=0.01 considered in subsection III A, where for the sake of completeness we also consider E_e=8 and 45 GeV. The frequency of the optical laser pulse is ω=ω_γ=1.55 eV. The differential cross section of the photon emission in multi-photon Compton scattering can be expressed as a function of the frequency ω' as follows <cit.> d^2 σ_i = 2α^2dφ dω'/ξ m^2 χ N_0∑_ℓ=1^∞1/|p-ℓω| [|p· e”_iA_0 + ea· e”_iA_1|^2 + ξ^2u^2/4(1+u)(A_1^2-A_0A_2)] , where ω' and φ are the frequency and the azimuthal angle of the recoil photon γ', respectively, ℓ is the number of beam photons involved in the process, the flux factor N_0 for monochromatic plane wave N_0=1/2. For the finite pulse, it is determined by the expression (<ref>). The invariant variable u=k· k'/k· p' is the function of ℓ and ω'. The auxiliary vectors e”_i are related to the polarization vectors e'_i in the equation (<ref>) as e”_i=e'_i - k'(k· e '_i/k· k'), and we assume p⃗↑↓k⃗. The basic functions A_m≡ A_m(ℓ,α,β) (do not confuse α with the fine structure constant) are determined as A_m(ℓαβ)=1/2π∫_-π^π dϕcos^m(ϕ) e^iℓϕ - iαsinϕ + iβsin2ϕ with α = zcosφ, β=ξ^3u/8χ , z = 2ℓξ/√(1+ξ^2/2)√(u/u_l(1-u/u_l)), u_l=2lχ/ξ(1+ξ^2/2). The cross sections for fixed 𝐞'_1 and 𝐞'_2 read d^2 σ_1 = 2α^2dφ dω'/ξχ m^2 N_0∑_l=1^∞ 1/|p-lω| [ξ^2 A_1^2sin^2φ + ξ^2u^2/4(1+u)(A_1^2-A_0A_2)] , d^2 σ_2 = 2α^2 dφ dω'/ξχ m^2 N_0∑_ℓ=1^∞ 1/|p-lω| [-A_0^2 -ξ^2 A_1^2sin^2φ + ξ^2 (1+u^2/4(1+u)(A_1^2-A_0A_2)] . The sum d^2σ≡ d^2σ_1+d^2σ_2 (with N_0=1/2) leads to well known expression for the unpolarized cross section d^2 σ = 4α^2 dφ dω'/ξ^2 k· p∑_l=1^∞ 1/|p-lω| [-A_0^2 + ξ^2 ((1+u^2/2(1+u)(A_1^2-A_0A_2))] . The difference in d^2σ_1,2 results in asymmetry (cf. Eq. (<ref>)) A_12(φ,ω')=d^2σ_1 - d^2σ_2/d^2σ . The sign of A_12-plus or minus is related to the direction of γ'-polarization relative to the axes 𝐞'_1 or 𝐞'_2, respectively. The degree of polarization is determined according to Eq. (<ref>). We also consider the asymmetry between cross sections dσ_i/dω', integrated over the azimuthal angle φ A_12(ω')=dσ_1 - dσ_2/dσ . Our analysis shows that A_12(ω') qualitatively and quantitatively is close to the averaged asymmetry ⟨ A_12(ω')⟩_φ=1/2π∫_0^2π dφ A_12(φ,ω') . In conclusion, we note that in the general case, the polarization of a recoil photon along the 𝐞_1,2 axes is a mixture of polarizations along the 𝐱 or 𝐲 axes. Only in coplanar geometry with φ=0, 𝐞_1∥y and 𝐞_2∥x and hence A_12(φ=0,ω')= dσ(⊥)-dσ(∥)/dσ(⊥) + dσ(∥) . In other words, in coplanar frame A_12 has a sense of the asymmetry of the cross sections with recoil photons polarized perpendicular or parallel to the beam polarization. § FINITE FIELD STRENGTH (Ξ≤ 1) §.§ Low field strength with ξ≪ 1 For methodological purposes, we consider a small field strength with ξ≪ 1, where the lowest harmonic with l=1 dominates. The functions A_m(l=1,α,β) for small values of the parameters α∼ξ, β∼ξ^2 read A_0≃α/2(1+β/2), A_1≃1/2 -α^2/16-β/4, A_2≃α/8(1+β). In the frame with p⃗=0 the cross section is dσ/dΩ =σ_0 (ω'/ω)^2 (ω'/ω+ω/ω' -2sin^2θcos^2φ) , where we denote σ_0=α^2/2m^2 and use identity dω'/|p-ℓω|=ω'^2dcosθ/ ℓ k· p. Averaging over φ leads to the well-known Klein-Nishina cross section. 
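As an aside on how the building blocks of Eqs. (<ref>)-(<ref>) can be evaluated in practice, the fragment below computes the basic functions A_m(ℓ,α,β) by direct quadrature. The parameter values are arbitrary illustrative numbers, and the final identity is our own consistency check (obtained by integrating the total derivative of the periodic integrand over one period), not a relation quoted in the text.

import numpy as np
from scipy.integrate import quad

def A(m, l, alpha, beta):
    # A_m(l, alpha, beta) of Eq. (A_m), evaluated by quadrature over one period
    def integrand(phi, part):
        val = np.cos(phi)**m * np.exp(1j * (l * phi - alpha * np.sin(phi) + beta * np.sin(2 * phi)))
        return val.real if part == "re" else val.imag
    re, _ = quad(integrand, -np.pi, np.pi, args=("re",), limit=200)
    im, _ = quad(integrand, -np.pi, np.pi, args=("im",), limit=200)
    return (re + 1j * im) / (2.0 * np.pi)

# illustrative harmonic and field parameters (not tied to a specific kinematic point)
l, alpha, beta = 3, 1.2, 0.05
A0, A1, A2 = (A(m, l, alpha, beta) for m in range(3))
print("A0, A1, A2 =", A0, A1, A2)
# by-parts consistency check for the plane-wave phase above: (l - 2*beta) A0 = alpha A1 - 4*beta A2
print("residual:", abs((l - 2 * beta) * A0 - alpha * A1 + 4 * beta * A2))

With A_0, A_1 and A_2 at hand, the partial cross sections d^2σ_1,2 follow from the harmonic sums written above.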
The partial cross sections read dσ_1/dΩ=σ_0(ω'/ω)^2 (1/2(ω'/ω + ω/ω') + 2sin^2φ -1) , dσ_2/dΩ = σ_0(ω'/ω)^2 (1/2(ω'/ω + ω/ω') - 2sin^2φ+1. - . 2sin^2θcos^2φ) , which leads to the asymmetry A_12 = 22sin^2φ +sin^2θcos^2φ -1/ω'/ω + ω/ω' - 2sin^2θcos^2φ. On the other hand, the cross section of e+γ⃗→ e'+γ⃗' reaction has the form <cit.> dσ_i/dΩ=1/2σ_0(ω'/ω)^2 (ω'/ω + ω/ω' - 2 + 4(𝐞_0𝐞_i)^2) , where 𝐞_0 and 𝐞_i are the polarization vectors of γ and γ', respectively. Taking 𝐞_0=𝐱 and 𝐞_i in the form (<ref>), we obtain expressions for the partial cross sections and asymmetries in the form of equations (<ref>) - (<ref>). In coplanar geometry with φ=0, the asymmetry is proportional -cos^2θ, i.e. negative. This means that the polarization of the recoil photon is directed along the 𝐞_2 axis, or along the beam polarization. The frequency of outgoing photon at fixed ℓ is limited by ω'_ max(ℓ), ω'_ max(ℓ)=ℓ k· q/q_0+|q⃗|cosθ +ℓω(1-cosθ) . Note that for the chosen kinematics, almost backward Compton scattering takes place with θ≃π. The differential cross sections dσ_i/dω', dσ/dω', and the double differential cross sections d^2σ/dφ dω', d^2σ/dφ dω' are exhibited in Fig. <ref>, The top, middle, and bottom panels are for the electron energy E_e=8, 17.5, and 45 GeV, respectively. The beam frequency ω_γ=1.55 eV, ξ=0.01. The left and right panels are for the differential dσ/dω' and double differential cross sections in a coplanar frame d^2σ/dφ dω'(φ=0), respectively. One can see that in region ω'<ω'_ min(ℓ=1) dσ_1/dω'>dσ_2/dω' and d^2σ_1/dφ dω'(ϕ=0) <d^2σ_2/dφ dω'(φ=0), which leads to the positive and negative asymmetries A_12(ω') and A_12(φ=0,ω'), respectively, exhibited in the Fig. <ref>. The asymmetry A_12(ω') has a bump-like behavior. Position of the bump shifts towards larger values of ω'/E_e as E_e increases. If ω'≃ω'_max(ℓ=1) the cross sections dσ_1/dω' and dσ_2/dω' are close to each other and A _12(ω')≃ 0. When ω' slightly exceeds ω'_ max, then dσ_1 > dσ_2. This results in some "horn" in the asymmetry. For larger values of ω' the cross section dσ_ 2 exceeds dσ_ 1, and the asymmetry becomes negative. The double differential cross section in a coplanar frame d^2σ_2 is greater than d^2σ_1, which leads to negative asymmetry and γ' is basically polarized parallel to the beam polarization. §.§ Plane wave approximation for 0.1≤ξ First of all, it should be noted that the result is qualitatively insensitive to the electron energy in the range of E_e=8...50 GeV, therefore, for simplicity and without loss of generality here and below we restrict ourselves to the electron energy E_e =17.5 GeV. Further, in the considered kinematics with 0<ω'<E_e, the cross sections decrease by more than 12 orders of magnitude, and it is difficult to see the difference between σ_1 and σ_2 on a logarithmic scale. Therefore, for simplicity, we restrict ourselves to the illustration of the differential cross sections dσ_i/dω' integrated over the azimuthal angle. The difference in partial cross sections manifests itself in the asymmetries, see (<ref>)-(<ref>). The cross sections and asymmetries for 0.1≥ξ≥ 1 in a plane wave approximation are calculated using Eqs. (<ref>)-(<ref>). The differential cross sections dσ_i/dω' for ξ=0.1, 0.5, and 1 are exhibited in Figs. <ref> in the top, middle, and bottom panels, respectively. The differential cross sections dσ_1/dω', dσ_2/dω' and dσ/dω' are shown by the red dashed-dot, green dashed, and blue solid curves, respectively. 
The cross sections have a step-like behavior, which reflects the partial contribution of harmonics with different ℓ in (<ref>). The height of a step is proportional to ξ^2. The asymmetries are displayed in the right panels. The solid black curves are for A_12(ω') (<ref>), the dashed blue curves are for asymmetries in coplanar geometry A_12(φ=0,ω'), the dot-dashed red curves are for ⟨ A_12⟩_φ (<ref>). The asymmetries A_12(ω') and ⟨ A_12⟩_φ are close to each other. They have bump-like behavior at low ω'/E_e, then become highly oscillated function of ω', vanishing at ω'→ E_e. Asymmetries in coplanar geometry with φ=0 are negative, which means that the γ' polarization is aligned parallel to the 𝐞_2 axis, or mostly parallel to the beam polarization. §.§ The finite pulse Considering the nonlinear Compton process for a finite linearly polarized laser pulse with e.m. four-potential (<ref>), we follow <cit.>, which is a generalization of our approach for circular polarization, see for example <cit.> and references therein. In this case, the differential cross sections d^2σ_i have the form of the equation (<ref>) with replacement of ∑_ℓ by ∫_ℓ_ min^∞dℓ, where the lower limit of the integral is ℓ_ min=um^2/2k· p. For completeness, we present the expressions for the partial cross sections in explicit form d^2 σ_1 = 2α^2dφ dω'/ξχ m^2E_e N_0∫_ℓ_ min^∞ dℓ [ξ^2 A_1^2sin^2φ + ξ^2u^2/4(1+u)(A_1^2 -A_0A_2)] , d^2 σ_2 = 2α^2 dφ dω'/ξχ m^2E_e N_0∫_ℓ_ min^∞ dℓ [-A_0^2 -ξ^2 A_1^2sin^2φ + ξ^2 (1+u^2/4(1+u)(A_1^2 -A_0A_2)] , where N_0=Δ(1 + 1/3Δ^2)/2π, Δ=π N, and we use |p⃗|≃ E_e≫ω. The basic functions A_m read A_m(ℓ) =1/2π∫_-∞^∞dϕ f^m(ϕ) cos^m(ϕ) e^iℓϕ -i P(ϕ) with P(ϕ) = α̃(ϕ)- β̃(ϕ) , α̃(ϕ)=α∫_-∞^ϕdϕ' f(ϕ')cosϕ' , β̃(ϕ)=4β∫_-∞^ϕdϕ' f^2(ϕ')cos^2ϕ' , z=2ξℓ√(u/u_ℓ(1-u/u_ℓ)) , u_ℓ=2ℓχ/ξ , where α and β are determined in (<ref>). The function A_0(ℓ) is regularized using the prescription of <cit.> which leads to identity ℓA_0(ℓ) = αA_1(ℓ) - 4βA_2(ℓ) . The cross sections and asymmetries for the finite pulse are exhibited in Fig. <ref>. We restrict ourselves by the number of oscillations in a pulse N=2 and 5 with field strength ξ=0.1, 0.5, and 1. The cross sections are qualitatively very close to that in the plane wave approximation shown in Fig.<ref>, being smoother functions of ω'. The corresponding asymmetries are exhibited in  Fig.<ref>. Similar to plane wave analysis, asymmetries ⟨A⟩_φ and A(ω') are close to each other; therefore, for simplicity, we present only the latter. Again, at low ω'/E_e asymmetries A(ω') are positive with a bump-like behaviour at ω'/E_e≈ 0.2, at larger ω' they change to negative, being highly oscillating functions of ω'/E_e, and vanishing at ω'→ E_e. Asymmetries in coplanar geometry A_12(φ=0,ω') are mostly negative. § ULTRA-LARGE PULSE INTENSITY, Ξ≫1 At large values of ξ≫1, the main contribution to the cross sections comes from the central part of envelope and for analysis one can use formalism developed by Nikishov and Ritus <cit.>. In this case, the partial sections dσ_i have the following form d σ_1/dω' = 4α^2ξ/m^2χ^2 E_e∫_0^π dψ∫_-∞^∞dτ usinψ [ξ^2 A_1^2sin^2φ + ξ^2u^2/4(1+u)(A_1^2 -A_0A_2)] , dσ_2/dω' = 4α^2ξ/χ^2m^2 E_e∫_0^π dψ∫_-∞^∞dτ usinψ [-A_0^2 -ξ^2 A_1^2sin^2φ + ξ^2 (1+u^2/4(1+u)(A_1^2-A_0A_2)] . 
The bi-linear combinations of A_k are expressed in terms of the Airy functions Φ and their derivatives Φ' A_0^2 = g^2/2π^2Φ^2(y) , g^2=4/ξ^2sin^2ψσ/y, A_1^2 = g^2/2π^2(ρ^2Φ^2(y) +σ/ξ^2yΦ'(y)) , A_0A_2 = g^2/2π^2(ρ^2-σ/ξ^2)Φ^2(y) , where the auxiliary variables ρ, σ and the argument of the Airy functions y read ρ^2 = cos^2ψ , sin^2φ=τ^2/τ^2 + ρ^2ξ^2 σ = 1+τ^2, y=(u/2χsinψ)^2/3σ . Evaluating d^2σ/dφ dω' we use identity ∫_0^π dψ∫_-∞^∞ dτ ξ^3usinψ/χ=∫_0^2πdφ∫_ℓ_ min^∞ dℓ , replacing τ^2 = (1+1/2ξ^2)(u_l/u-1)sin^2φ, ξ^2ρ^2 = (1+1/2ξ^2)(u_l/u-1)cos^2φ with u_ℓ=2ℓχ/ξ(1+ξ^2/2). The differential cross sections dσ_i/dω' and asymmetries A_12 as the functions of ω'/E_e are exhibited in Fig. <ref> in the left and right panels, respectively. The top, middle, and bottom panels are for ξ=5, 20, and 50, respectively. The cross sections decrease monotonically as ω' increases, vanishing as ω'→ E_e. All asymmetries are negative which indicates that the polarization of the recoil photon is oriented along the 𝐞_2 axis. For illustration, Fig. <ref> shows the asymmetry as a function of ξ in the intervals 0.001≥ξ≥ 1 and 5≥ξ≥100 on the left and right panels, respectively, at a fixed ω'/E_e=0.2. The alignment of the γ' polarization with respect to the 𝐞_1 axis changes from positive for small ξ<1 to negative for large ξ≫1. The asymmetry in the coplanar geometry A(φ=0,ω') is negative and close to -1 regardless of the field strength.   § CONCLUSION In summary, we have analyzed the polarization of the recoil photon γ' in non-linear Compton scattering. The polarization axes 𝐞_1,2 are chosen according to (<ref>). Polarization of γ' is described by the asymmetry A_12. The initial photon polarization is chosen along 𝐱 axis. In the reference frame with φ=0 (coplanar geometry), the 𝐞_1 and 𝐞_2 axes correspond to the γ' polarization, perpendicular or parallel to the beam polarization, respectively. We have analyzed the γ' polarization in the LUXE kinematics with electron energy E_e=17.5 GeV and optical laser with beam frequency 1.55 eV in a wide range ξ. The result of our study shows that for small ξ<1 and ω'/E_e≃0.2 the asymmetry A_12(ω') is large and positive, which indicates that the recoil photon is polarized along 𝐞_1 axis. At ultra-high beam intensity with ξ≫1 the asymmetry is negative and polarization of the recoil photon has tendency to be parallel to 𝐞_2 axis. The asymmetry in coplanar geometry A(φ=0,ω') is negative and close to -1 regardless of the field strength, which means that the recoil photon is polarized parallel to the beam polarization. At kinematical limit ω'→ E_e the asymmetry vanishes in all cases and the recoil photon becomes unpolarized. Finally, we note that the yield of pairs in electron-laser interaction based on the nlC⊗nlBW-type folding model was estimated at a probabilistic level in <cit.>. The initial polarization of photons was not taken into account. Accounting for the polarization of Compton photons can be a further development of such studies. § ACKNOWLEDGMENTS I am grateful to B. Kämpfer for our fruitful previous collaboration on studying different topics of strong-field QED and to O. V. Teryaev for discussions of various aspects of spin physics. 99 AM_Review A. Fedotov, A. Ilderton, F. Karbstein, B. King, D. Seipt, H. Taya, G. Torgrimsson. Advances in QED with intense background fields. Phys. Rep. 1010, 1-138 (2023); arXiv:2203.00019v2 [hep-ph]. ADPiazza A. Di. Piazza, C. Müller, K. Z. Hatsagortsyan, and C. H. Keitel. 
“Extremely high-intensity laser interactions with fundamental quantum systems". Rev. Mod. Phys. 84, 1177 (2012). RitusGroup A. I. Nikishov and V. I. Ritus. “Quantum processes in field of a plane electromagnetic wave and a constant field". Sov. Phys. JETP. 19, 529 (1964). Ritus-79 V. I. Ritus. “Quantum effects of the interaction of elementary particles with an intense electromagnetic field". J. Sov. Laser Res. (United States), 6:5, 497 (1985). DiPiazza:2020wxp A. Di Piazza, “Unveiling the transverse formation length of nonlinear Compton scattering,” Phys. Rev. A 103, no. 1, 012215 (2021) [arXiv:2009.00526 [hep-ph]]. Seipt:2020diz D. Seipt and B. King, “Spin- and polarization-dependent locally-constant-field-approximation rates for nonlinear Compton and Breit-Wheeler processes,” Phys. Rev. A 102, no. 5, 052805 (2020) [arXiv:2007.11837 [physics.plasm-ph]]. King:2020btz B. King and S. Tang, “Nonlinear Compton scattering of polarized photons in plane-wave backgrounds,” Phys. Rev. A 102, no. 2, 022809 (2020) [arXiv:2003.01749 [hep-ph]]. Ilderton:2020dhs A. Ilderton, B. King and S. Tang, “Toward the observation of interference effects in nonlinear Compton scattering,” Phys. Lett. B 804, 135410 (2020) [arXiv:2002.04629 [physics.atom-ph]]. Heinzl2020 T. Heinzl, B. King, and A. J. MacLeod. “The locally monochromatic approximation to QED in intense laser fields". Phys. Rev. A 102, 0163110 (2020) arXiv:2004.13035 [hep-ph] TitovPEPAN A. I. Titov, B. Kämpfer and H. Takabe, “Nonlinear Breit-Wheeler process in short laser double pulses,” Phys. Rev. D 98, no. 3, 036022 (2018) [arXiv:1807.04547 [physics.plasm-ph]]. Granz:2019sxb L. F. Granz, O. Mathiak, S. Villalba-Chávez and C. Müller, “Electron-positron pair production in oscillating electric fields with double-pulse structure,” Phys. Lett. B 793, 85 (2019) [arXiv:1903.06000 [physics.plasm-ph]]. Kampfer2023 U H. Acosta, B. Kämpfer. “Strong-field QED in Furry-picture momentum-space formulation: Ward identities and Feynman diagrams.” arXiv preprint arXiv:2303.12941, 2023. E-144 D. L. Burke,et al.. “Positron Production in Multiphoton Light-by-Light Scattering". Phys. Rev. Lett,79, 1626 (1997); C. Bamber,et al.. "Studies of nonlinear QED in collisions of 46.6 GeV electrons with intense laser pulses". Phys. Rev. D. 60, 092004(1999). LUXE_exp Conceptual design report for the LUXE experiment. H. Abramowicz et al. Eur. Phys. J. Spec. Top. 230: 2445-2560 (2021) https://doi.org/10.1140/epjs/s11734-021-00249-z E_320 S. Meuren, “Probing Strong-field QED at FACET-II (SLAC E-320) (2019)", E_320add S. Meuren et al. “On Seminal HEDP Research Opportunities Enabled by Colocating Multi-Petawatt Laser with High-Density Electron Beams,” arXiv:2002.10051 [physics.plasm-ph]. XFEL European X-Ray Free-Electron Laser Facility GmbH. Qiqi Yu, Dirui Xu, Baifei Shen, Thomas E. Cowan, and Hans-Peter Schlenvoigt. “X-ray polarimetry and its application to strong-field QED.” High Power Laser Science and Engineering, 2023. TK2020 A.I. Titov and B. Kämpfer. Non-linear Breit–Wheeler process with linearly polarized beams. Eur. Phys. J. D 74, 218 (2020); [arXiv:2006.04496 [hep-ph]. Akhiezer_Berestetsky A. I. Akhiezer and V. B. Berestetsky. Quantum electrodynamics, Interscience Publishers; Revised Edition (January 1, 1965). LL4 V. B. Berestetskii, E. M. Lifshitz, and L. P. Pitaevskii. “Quantum Electrodynamics”, Vol. 4 (Butterworth-Heinemann, 1982). Titov2020 A. I. Titov, A. Otto, B. 
Kämpfer “Multi-photon regime of non-linear Breit-Wheeler and Compton processes in short linearly and circularly polarized laser pulses". Eur. Phys. J. D 74 39 (2020). Boca-2009 M. Boca and V. Florescu. “Non-linear Compton scattering with a laser pulse". Phys. Rev. A 80, 053403 (2009), Erratum Phys. Rev. A 81, 039901 (2010). TAK2021 A.I. Titov U Hernandez, and B. Kämpfer Positron energy distribution in a factorized trident process Phys.Rev.A 104, 6, 062811 (2021).
http://arxiv.org/abs/2307.02158v1
20230705095941
Computation of excited states for the nonlinear Schr{ö}dinger equation: numerical and theoretical analysis
[ "Christophe Besse", "Romain Duboscq", "Stefan Le Coz" ]
math.AP
[ "math.AP", "cs.NA", "math.NA" ]
]Computation of excited states for the nonlinear Schrödinger equation: numerical and theoretical analysis C. Besse]Christophe Besse R. Duboscq]Romain Duboscq S. Le Coz]Stefan Le Coz This work was supported by the ANR LabEx CIMI (grant ANR-11-LABX-0040) within the French State Programme "Investissements d'Avenir". [Christophe Besse]Institut de Mathématiques de Toulouse ; UMR5219, Université de Toulouse ; CNRS, UPS IMT, F-31062 Toulouse Cedex 9, France [Christophe Besse][email protected] [Romain Duboscq]Institut de Mathématiques de Toulouse ; UMR5219, Université de Toulouse ; CNRS, UPS IMT, F-31062 Toulouse Cedex 9, France [Romain Duboscq][email protected] [Stefan Le Coz]Institut de Mathématiques de Toulouse ; UMR5219, Université de Toulouse ; CNRS, UPS IMT, F-31062 Toulouse Cedex 9, France [Stefan Le Coz][email protected] [2010]35Q55,35C08,65N06 Our goal is to compute excited states for the nonlinear Schrödinger equation in the radial setting. We introduce a new technique based on the Nehari manifold approach and give a comparison with the classical shooting method. We observe that the Nehari method allows to accurately compute excited states on large domains but is relatively slow compared to the shooting method. [ [ August 1, 2023 ================== § INTRODUCTION We consider the nonlinear Schrödinger equation iu_t+Δ u+f(u)=0 where u:×^d→ℂ and f∈𝒞^1(,) is an odd function extended to ℂ by setting f(z)=f(|z|)z/|z| for all z∈ℂ∖{0}. Equation (<ref>) arises in various physical contexts, for example in nonlinear optics or in the modelling of Bose-Einstein condensates. For physical applications as well as for its numerous interesting mathematical properties, (<ref>) has been the subject of an intensive research over the past forty years. We refer for example to the books of Cazenave <cit.>, Fibich <cit.> and Sulem and Sulem <cit.> for an overview of the known properties of (<ref>) and references. In this paper, we focus on special solutions of (<ref>), the so-called standing waves. They are solutions of the form e^iω tϕ(x) with ω>0 and ϕ satisfying {[ -Δϕ+ωϕ-f(ϕ)=0,; ϕ∈ H^1(ℝ^d) ∖{0}. ]. Among solutions of (<ref>), it is common to distinguish between the ground states, or least energy solutions, and the other solutions, the excited states. A ground state is a solution of (<ref>) minimizing among all solutions of (<ref>) the functional S, often called action, defined for v∈ H^1(ℝ^d) by S(v):=1/2∇ v_L^2^2+ω/2v_L^2^2-∫_ℝ^d F(v)dx, where F(z):=∫_0^ |z|f(s)ds for all z∈ℂ. An excited state is a solution of (<ref>) which is not a ground state. In general, we shall refer to any solution of (<ref>) as bound state. Sufficient and almost necessary hypotheses on f to ensure the existence of bound states are known since the fundamental works of Berestycki and Lions <cit.> and Berestycki, Gallouët and Kavian <cit.>. Under these hypotheses, it is proved in <cit.> that, except in dimension d=1 where all bound states are ground states, there exist ground states and infinitely many excited states. Note that the terminology ground state may be understood in different ways depending on the context. Some authors may call ground state any minimizer of the energy functional under the mass constraint. For power-type mass-subcritical nonlinear Schrödinger equations, this definition will coincide with ours, However, in other settings such as for the power-type mass-supercritical nonlinear Schrödinger equations, the two definitions do not coincide any more (there is no ground states in the later sense). 
With our definition of ground states, it has long been established for power-type nonlinearities that ground states are positive, radial, and unique (see <cit.>). On the other hand, excited states will necessarily change sign, and may even be complex valued (see e.g. <cit.>). They also need not be radial, nor even have any kind of symmetry group (as was shown recently in <cit.>). There are numerous works devoted to the numerical calculations of ground states. Among many others, we find the seminal work of Bao and Du <cit.> devoted to the gradient flow with discrete normalisation, which has been used in many settings (including by the authors of the present paper in the context of quantum graphs <cit.>). We also mention the work of Choi and McKenna <cit.> devoted to the numerical implementation of the Mountain Pass approach, which has also been followed by numerous extensions and improvements (see e.g. <cit.>). The Mountain Pass approach can be modified to compute nodal states, as was done by Costa, Ding and Neuberger <cit.>, whose approach was later followed by Bonheure, Bouchez, Grumiau, and van Schaftingen <cit.>. Not many other works in the literature are devoted to the calculation of excited states, and to our knowledge this paper is the first one to present an approach based on the Nehari manifold. Our goal in this article is to develop numerical methods for the computation of excited states. We will also take this opportunity to study numerically some properties of the excited states and establish some conjectures that could be further investigated theoretically. The two methods that we are considering are the shooting method and the Nehari method. The shooting method is a classical method for the computation of solutions of boundary value problems. It consists in transforming the d-dimensional partial differential equation into an ordinary differential equation by considering radial solutions. The boundary value problem for the ordinary differential equation is then converted into an initial value problem, which can be easily solved using a standard scheme such as the Runge-Kutta 4th order method. In the present case, since we are working with an elliptic problem, we are led to consider initial conditions on the solution and its first derivative. The first derivative is necessarily set to 0 since it originates from a smooth radial function. We are thus left with the initial value of the solution, which will be used as a parameter to be chosen in order to recover the boundary condition at infinity. The method is described in Section <ref>. The idea of the Nehari method originates from the variational characterization of bound states as minimizers of the action functional under constraints built upon the Nehari functional. In the case of the ground state, we simply minimize the action among functions for which the Nehari functional vanishes. To obtain excited states, more elaborate constructions are required. For example, one may minimize the action on functions having non-trivial positive and negative parts both satisfying the Nehari constraint. We establish the existence of a solution (called least energy nodal minimizer) to this variational problem in a radial setting in Theorem <ref>. We refer to Section <ref> for more details on the theoretical background. The numerical approach is based on a projected gradient method which consists in one step of gradient flow for the action followed by a projection on the chosen space of constraints. The method is described in Section <ref>.
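To make the shooting procedure just described concrete, here is a minimal sketch for the model power case f(u)=|u|^{p-1}u (with the illustrative choice ω=1, p=3, d=2): the radial profile equation is integrated by the Runge-Kutta 4 scheme and the initial value ϕ(0) is bisected on the number of sign changes of the computed profile. The integration window, blow-up threshold and the assumed monotonicity of the node count in ϕ(0) are simplifications made for illustration; this is not the exact implementation described in Section <ref>.

import numpy as np

def shoot(a, omega=1.0, p=3.0, d=2, rmax=25.0, n=5000, blow=1e6):
    # integrate phi'' + (d-1)/r phi' - omega*phi + |phi|^(p-1) phi = 0 with phi(0)=a, phi'(0)=0 (RK4)
    h = rmax / n
    def rhs(r, y):
        phi, dphi = y
        if r == 0.0:   # radial Laplacian at the origin: phi''(0) = (omega*phi - f(phi)) / d
            return np.array([dphi, (omega * phi - abs(phi)**(p - 1) * phi) / d])
        return np.array([dphi, -(d - 1) / r * dphi + omega * phi - abs(phi)**(p - 1) * phi])
    y, r, nodes, sign = np.array([a, 0.0]), 0.0, 0, np.sign(a)
    for _ in range(n):
        k1 = rhs(r, y); k2 = rhs(r + h/2, y + h/2 * k1)
        k3 = rhs(r + h/2, y + h/2 * k2); k4 = rhs(r + h, y + h * k3)
        y, r = y + h/6 * (k1 + 2*k2 + 2*k3 + k4), r + h
        if y[0] != 0.0 and np.sign(y[0]) != sign:
            nodes, sign = nodes + 1, np.sign(y[0])
        if abs(y[0]) > blow:
            break
    return nodes

def find_profile(k, lo=0.1, hi=20.0, iters=60):
    # bisection on phi(0): the k-node bound state sits between "<= k nodes" and ">= k+1 nodes"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for k in range(3):
    print("k = %d nodes :  phi(0) ~ %.6f" % (k, find_profile(k)))

Each value of ϕ(0) located this way is the initial datum of a radial profile with exactly k nodes, which is the quantity studied numerically in Section <ref>.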
While being a standard theoretical tool, the Nehari approach has been seldom used in numerical analysis. To our knowledge, this paper is the first to investigate the computation of excited states using a Nehari approach. While the shooting method is the go-to method for finding excited states, we identified some limitations of this approach. Indeed, it can only compute radial excited states, while the Nehari method could be extended to non-radial problems. Moreover, even in the radial case, the length of the interval on which the solution is computed is limited for the shooting method due to propagation of the error from the initial condition. This issue is not present for the Nehari approach. On the other hand, the convergence of the Nehari approach is much slower compared to the shooting method which has a maximal number of iteration for a given precision. The two methods are discussed in Section <ref>. We conclude this paper by some numerical experiments. We investigate numerically the relation between the initial values of the bound states and their total number of nodes. We also study the positions of the nodes and the extremal values between two consecutive nodes. For each case, we provide some guess on the underlying behavior. This material is presented in Section <ref>. § THEORETICAL APPROACH FOR THE MINIMIZATION OVER THE NEHARI MANIFOLD Our goal in this section is to present some theoretical elements around ground states and excited states and the Nehari minimization approach. We start by reviewing some well-known facts for the existence and properties of ground states and excited states. We also present the classical variational characterization of ground states as minimizers on the Nehari manifold (see Proposition <ref>). The rest of the section is then devoted to the statement and proof of a characterization of the first nodal radial excited state as a minimizer on the Nehari nodal set (see Theorem <ref>). While the approach to obtain Theorem <ref> borrows elements from the existing literature, the result itself seems to be new. We consider (<ref>) with solutions belonging to H^1(^d,). A typical example for f is the power-type nonlinearity f(u)=|u|^p-1u, 1<p<2^*-1, where 2^* is the critical Sobolev exponent, i.e. 2^*=2d/d-2 if d≥ 3, 2^*=∞ if d=1,2. More generally, we assume that f:→ verifies the following hypotheses (which are not optimal, but sufficient for our purpose). * (regularity) The function f is continuous and odd. * (subcriticality) There exists 1<p<2^*-1 such that for large s, |f(s)|≲ |s|^p. * (superlinearity) At 0, lim_s→0f(s)/s=0. * (focusing) There exists ξ_0>0 such that F(ξ_0)=∫_0^ξ_0f(s)ds>ξ_0^2/2. Under <ref>-<ref>, it is well known (see <cit.>) that there exist ground state solutions, i.e. solutions with minimal action (see (<ref>) for the definition of the action) among all possible solutions to (<ref>). Our definition of ground states as minimal action solutions is very common in the analysis of nonlinear elliptic partial differential equations. The terminology ground state has however several other acceptations in other contexts. E.g. when working with Schrödinger equations modelling Bose-Einstein condensation, one might call ground state a minimizer of the energy on fixed mass constraint. Uniqueness of the ground state holds if f satisfies in addition to <ref>-<ref> some complementary requirements, e.g. if f is of power-type, see <cit.>. When d≥ 2, it was proved in <cit.> that there exists an infinite sequence of excited states, i.e. 
solutions to (<ref>) whose action is not minimal (actually, the corresponding sequence of actions tends to infinity). Moreover, under additional assumptions on the nonlinearity (e.g. if d=2 and f is of power-type), then there exists only one radial excited state with a given number of nodes, see <cit.>. Recall that the action functional S:H^1(^d)→ is defined in (<ref>). It is a 𝒞^1 functional (see e.g. <cit.>) and u is a solution of (<ref>) if and only if S'(u)=0. We define the Nehari functional by I(u)=S'(u)u=∇ u_L^2^2+u_L^2^2-∫_^df(u)udx. The Nehari manifold is defined by 𝒩={u∈ H^1(^d)∖{0}:I(u)=0}. Define the Nehari level by m_𝒩=inf{S(v):v∈𝒩}. In addition to <ref>-<ref>, we assume the following. * The function s→f(s)/s is increasing for s>0. * (Ambrosetti-Rabinowitz superquadraticity condition) There exists θ>2 such that θ F(s)<sf(s) for all s>0. Then under <ref>-<ref>, the following holds (see e.g. <cit.> and the reference cited therein). For every sequence (u_n)∈𝒩 such that lim_n→∞S(u_n)=m_𝒩, there exist u_∞∈𝒩 and (y_n)⊂^d such that lim_n→∞u_n(·-y_n)-u_∞_H^1=0. Moreover, u_∞ is a ground state solution of (<ref>). We now want to construct variational characterizations of excited states which can be used in numerical approaches. Based on Proposition <ref>, it is natural to try to generalize the Nehari manifold approach. Several directions of investigations are possible. The most natural one is probably to define the Nehari nodal set as 𝒩_nod={u∈ H^1(^d):I(u^+)=0,I(u^-)=0,u^±≠0}. where u^+=max(u,0) and u^-=max(-u,0). Define the Nehari nodal level by m_𝒩_nod=inf{S(v):v∈𝒩_nod}. An approach based on minimization of the energy on mass constraints for the positive and negative part of the function cannot work, as the minimizer that we might obtain would be (formally) a solution of an equation of the form E'(u)+λ_+M'(u_+)+λ_-M'(u_-) = 0, with potentially different Lagrange multipliers λ_±. This issue is avoided with the Nehari approach. We have m_𝒩_nod=2m_𝒩. Indeed, let u∈𝒩_nod. Since u^+ and u^- are both in 𝒩, we have S(u)=S(u^+)+S(u^-)≥ 2m_𝒩, and therefore m_𝒩_nod≥ 2m_𝒩. Let u_∞ be a minimizer for m_𝒩 and for (y_n)⊂^d, define u_n=u_∞(·+y_n)-u_∞(·-y_n). When |y_n|→∞, we have S(u_n)→ 2m_𝒩, and this proves (<ref>). Unfortunately, m_𝒩_nod is not achieved. Indeed, suppose on the contrary that u_nod realizes the minimum for m_𝒩_nod. Since u_nod^±∈𝒩 and m_𝒩_nod=2m_𝒩, both u_nod^+ and u_nod^- realize the minimum for 𝒩 and are ground states of (<ref>). In particular, they are both regular, and by the maximum principle, both have to be positive or negative on the whole ^d, which is a contradiction. Therefore m_𝒩_nod is not achieved - from (<ref>), we can easily guess that this is due to a loss of compactness in the minimizing sequences. On the other hand, if the power nonlinearity |u|^p-1u is replaced by a Choquart/Hartree term (e.g. (|x|^-1*|u|^2)u in ^3), then it is possible to obtain nodal critical points by minimizing S on 𝒩_+∩𝒩_-, see <cit.>. To overcome this issue, we decide to work in a radial setting (recall from Strauss' Lemma <cit.> that the injection H^1_rad(^d)↪ L^q(^d), 2<q<2^* is compact whenever d≥2). Define 𝒩_nod,rad={u∈ H^1_rad(^d):I(u^+)=0,I(u^-)=0,u^±≠0}, and m_𝒩_nod,rad=inf{S(v):v∈𝒩_nod,rad}. Then the following result gives the existence of a minimizer for m_𝒩_nod,rad. For every sequence (u_n)∈𝒩_nod,rad such that lim_n→∞S(u_n)=m_𝒩_nod,rad there exists u_∞∈𝒩_nod,rad such that lim_n→∞u_n-u_∞_H^1=0. Moreover, u_∞ is a nodal solution of (<ref>) with exactly two nodal domains. 
We say that u_∞ is a least nodal excited state. Minimizing on 𝒩_nod,rad is intrinsically more difficult that minimizing on 𝒩. Indeed, 𝒩_nod,rad is not a manifold, as the functionals u∈ H^1(^d)→∇ u^±_L^2^2 are not 𝒞^1 (see the discussion after Theorem 18 in <cit.>). The rest of this section is devoted to the proof of Theorem <ref>. We start with some preliminary lemmas. The constant 0 is a local minimum for S. Let u∈ H^1(^d)∖{0}. There exists a unique s_u∈ (0,∞) such that I(s_u u)=0. Moreover, S(s_u u)=max_s∈(0,∞)S(su)>0. If I(u)<0, then s_u<1, whereas if I(u)>0, then s_u>1 and S(u)>0. Let u∈ H^1(^d)∖{0} and define h:(0,∞)→ by h(s):=S(su)=s^2/2u_H^1^2-∫_ F(su)dx. Since F is differentiable, so is h and we have h'(s)=s u_H^1^2-∫_ f(su)udx. Remark that sh'(s)=I(su). Due to <ref>, the derivative h' can vanish only once in (0,∞). Indeed, assume by contradiction that there exist 0<s_1<s_2 such that h'(s_1)=h'(s_2)=0. Then we have ∫_f(s_1u)/s_1uu^2dx=∫_f(s_2u)/s_2uu^2dx. Since by <ref> s→f(s)/s is increasing, we have a contradiction. Since f(s)=o(s) for s→0, we have S(su)>0 if s is small enough. On the other hand, <ref> implies that ∂_sF(s)/s^2=f(s)s-2F(s)/s^3>(θ-2)/sF(s)/s^2, i.e. F is superquadratic and therefore for s large we must have S(su)<0. Hence h' vanishes exactly once at s_u, h'(s)>0 for s<s_u and h'(s)<0 for s>s_u. Moreover, h(s_u)=max_s∈(0,∞)h(s). Since S(su)=h(s) and I(su)=sh'(s), this concludes the proof. Define 𝒩_rad:=𝒩∩ H_rad^1(^d). We have the following compactness result. Let d≥ 2. Let (u_n)⊂𝒩_rad and assume that S(u_n) is bounded. Then (u_n) is bounded in H^1(^d) and there exists u_∞∈ H_rad^1()∖{0} such that (up to extraction of a subsequence) we have u_n⇀ u_∞ weakly in H_rad^1(). Moreover, there exists s_∞>0 such that s_∞ u_∞∈𝒩_rad and S(s_∞ u_∞)≤lim inf_n→∞ S(u_n). Take a sequence (u_n)∈𝒩_rad and assume that S(u_n) is bounded. Arguing by contradiction, we assume that ν_n:=u_n_H^1→∞. Define a sequence (v_n)⊂ H_rad^1(^d) by v_n:=u_n/ν_n. Then (v_n) is bounded in H_rad^1(^d) and there exists v_∞∈ H_rad^1(^d) such that v_n⇀ v_∞ weakly in H_rad^1(^d). We claim that v_∞≠ 0. Arguing again by contradiction, assume that v_∞=0. By Lemma <ref>, for any s>0, we have S(u_n)=S(ν_nv_n)≥ S(sv_n)=s^2/2-∫_ F(sv_n)dx. Since d≥2, the injection H_rad^1(^d)↪ L^q(^d) is compact for any 2<q<2^*. Combined with <ref>, this implies weak continuity of u→∫_^dF(u)dx. Since we assumed v_∞=0, for n large we have S(u_n)≥s^2/4, which is a contradiction since S(u_n) is bounded and s can be chosen a large as desired. Hence v_∞≠ 0. Moreover, since I(u_n)=0, we have 0≤S(u_n)/ν_n^2=1/2-1/ν_n^2∫_F(ν_n v_n)dx. From <ref>, we have F(s)≥ s^θ, thus lim_s→∞F(s)/s^2=∞. Recall that upon extraction of subsequences, v_n⇀ v_∞≠ 0 weakly in H_rad^1(^d) and v_n(x)→ v_∞(x) a.e. By Fatou's Lemma, this implies 1/ν_n^2∫_F(ν_n v_n)dx=∫_F(ν_n v_n)/(ν_n v_n)^2v_n^2dx→∞ as n→∞. This leads to a contradiction in (<ref>). Therefore, (ν_n)=(u_n_H^1) has to remain bounded. As a consequence, there exists u_∞∈ H_rad^1(^d) such that u_n⇀ u_∞. We can prove that u_∞≠ 0 in the same way as we did for v_∞. Moreover, there exists s_∞ such that s_∞ u_∞∈𝒩_rad and we have S(s_∞ u_∞)≤lim inf_n→∞S(s_∞ u_n)≤lim inf_n→∞S(u_n). This concludes the proof. Let (u_n) be a minimizing sequence for m_𝒩_nod,rad, i.e. (u_n)⊂𝒩_nod,rad and S(u_n)→ m_𝒩_nod,rad as n→∞. Since (u_n^±)⊂𝒩, by Lemma <ref> the sequence (u_n^±) is bounded in H_rad^1(^d) and there exists ũ_∞^±∈ H_rad^1(^d)∖{0} such that u_n^±⇀ũ_∞^± weakly in H_rad^1(^d) and a.e. 
In particular, pointwise convergence implies ũ_∞^+ũ_∞^-=0 a.e. Let s^± be such that I(s^±ũ_∞^±)=0 and define u_∞=s^+ũ_∞^+-s^-ũ_∞^-. By construction u_∞∈𝒩_nod,rad. Moreover, S(u_∞)=S(s^+ũ_∞^+)+S(s^-ũ_∞^-)≤lim inf_n→∞(S(u_n^+)+S(u_n^-))=lim_n→∞S(u_n)=m_𝒩_nod,rad. This proves that u_∞ is a minimizer for m_𝒩_nod,rad. Let us show that in fact s^±=1 and the sequence (u_n) converges strongly in H_rad^1(^d) toward u_∞. By <ref> and Strauss' Lemma, the functional u↦∫_^df(u)udx is weakly continuous on H_rad^1(^d). This implies I(ũ_∞^±)≤lim inf_n→∞ I(u_n^±)=0. Hence by Lemma <ref> we have s^±≤ 1. Moreover m_𝒩_nod,rad≤ S(u_∞)≤lim inf_n→∞ (S(u_n^+)+S(u_n^-))=m_𝒩_nod,rad and weak continuity of the nonlinear part of S imply u_∞_H^1^2=lim inf_n→∞(u_n^+_H^1^2+u_n^-_H^1^2). Moreover lim inf_n→∞(u_n^+_H^1^2+u_n^-_H^1^2)≥ũ_∞^+-ũ_∞^-_H^1^2=1/(s^+)^2s^+ũ_∞^+_H^1^2+1/(s^-)^2s^-ũ_∞^-_H^1^2 ≥1/max(s^-,s^+)^2(s^+ũ_∞^+_H^1^2+s^-ũ_∞^-_H^1^2)=1/max(s^-,s^+)^2u_∞_H^1^2 =1/max(s^-,s^+)^2lim inf_n→∞(u_n^+_H^1^2+u_n^-_H^1^2). where the first inequality is from weak convergence and the last equality follows from (<ref>). Since we already know that s^±≤ 1, this implies that s^±=1 and strong convergence of (u_n) towards u_∞ in H^1(^d). We now show that u_∞ is a critical point of S. Recall that 𝒩_nod,rad is not a manifold and we cannot use a Lagrange multiplier argument for the minimizers of m_𝒩_nod,rad. Instead, we shall use the quantitative deformation lemma of Willem <cit.>, which we recall in Appendix (see Lemma <ref>). Arguing by contradiction, we assume that S'(u_∞)≠ 0. Then there exist δ,μ>0 such that v-u_∞_H^1≤ 3δS'(u_∞)_H^1≥μ. Define D=[1/2,3/2]×[1/2,3/2] and g(s,t)=su_∞^+-tu_∞^-. Let s,t>0 be such that either s≠1 or t≠1. Then from <ref> we infer that S(su_∞^+-tu_∞^-)=S(su_∞^+)+S(tu_∞^-)<S(u_∞^+)+S(u_∞^-)=m_𝒩_nod,rad. Consequently, S(g(s,t))=m_𝒩_nod,rad if and only if s=t=1 and otherwise S(g(s,t))<m_𝒩_nod,rad. Hence β:=max_∂ D S∘ g<m_𝒩_nod,rad. Let :=min(m_𝒩_nod,rad-β/4,μδ/8). The deformation lemma <ref> gives us a deformation η verifying (a) η(1,v)=v if v∉ S^-1([m_𝒩_nod,rad-2,m_𝒩_nod,rad+]), (b) S(η(1,v))≤ m_𝒩_nod,rad- for every v∈ H_rad^1(^d) such that v-u_∞_H^1≤δ and S(v)≤ m_𝒩_nod,rad+, (c) S(η(1,v))≤ S(v) for all v∈ H^1_rad(^d). In particular, we have max_(s,t)∈ DS(η(1,g(s,t)))<m_𝒩_nod,rad. To obtain a contradiction we prove that η(1,g(D))∩𝒩_nod,rad≠∅. Define h(s,t):=η(g(1,g(s,st))) and ψ_0(s,t):=(I(s u_∞^+),I(t u_∞^-)), ψ_1(s,t):=(I(h^+(s,t)),I(h^-(s,t))), As I(s u_∞^±)>0 (resp. <0) if 0<s<1 (resp. s>1), the degree of ψ_0 (see e.g. <cit.> for the definition and basic properties of the degree) is Deg(ψ_0,D,0)=1. From (a) and (<ref>), we have g=h on ∂ D. Therefore, ψ_0=ψ_1 on ∂ D, which implies Deg(ψ_1,D,0)=1=Deg(ψ_0,D,0)=1. Therefore, there exists (s,t)∈ D such that ψ_1(s,t)=0. That means h(s,t)∈𝒩_nod,rad, a contradiction with (<ref>) and the definition of m_𝒩_nod,rad. Therefore u_∞ is a critical point of S. It remains to prove that u_∞ has exactly two nodal domains. Observe first that u_∞ is also a minimizer for m_alt = inf{∫_^d-F(u)+1/2f(u)udx:I(u^+)≤ 0, I(u^-)≤0, u^±≠0}. By <ref>, 2F(s)-f(s)s<0 for any s. As a consequence, the minimizers of m_alt verify I(u^+)=I(u^-)=0. Indeed, arguing by contradiction and assuming e.g. I(u^+)<0, one could replace u^+ by tu^+ with 0<t<1 such that I(tu^+)=0, which would give a minimizer of m_alt for a lower value and provides the contradiction. 
Assume now that u_∞ has more than one nodal region, i.e that there exists s_1<s_2<s_3 such that u_∞>0 on (0,s_1), u_∞<0 on (s_1,s_2) and u_∞>0 on (s_2,s_3). Let u_1 be such that u_1=u_∞ on (0,s_1) and u_1=0 elsewhere. Define similarly u_2 for (s_1,s_2) and u_3^±=u_∞^±, on (s_3,∞). Since I(u^+)=0 we either have I(u_1)≤0 or I(u_3^+)≤ 0. Without loss of generality, assume that I(u_1)≤0. We may then construct ũ_∞ such that ũ_∞=u_1 on (0,s_1), ũ_∞=u_2 on (s_1,s_2) and ũ_∞=0 on (s_3,∞). As it is not containing the u_3^± parts, ũ_∞ would be a minimizer for m_alt for a lower value than u_∞, which provides a contradiction and finishes the proof. § THE SHOOTING METHOD We describe in this section the shooting method, its theoretical background and its practical implementation. Radial solutions of (<ref>) can be obtained as solutions of the ordinary differential equation -u”(r)-d-1/ru'(r)+ω u(r)-f(u(r))=0. They should satisfy the boundary conditions u'(0)=0, lim_r→∞u(r)=0. It was established in <cit.> that, under convexity assumptions on f (which hold for example in dimension d=2 when f is of power-type), that for any k∈ℕ, the equation (<ref>) admits exactly one solution having exactly k nodes. More precisely, it was proved in <cit.> that there exists an increasing sequence (α_k)⊂ (0,∞) such that for any k∈ℕ and for any α∈ (α_k-1,α_k) (with the understanding that α_-1=0), the solution of the Cauchy Problem -u”(r)-d-1/ru'(r)+ω u(r)-f(u(r))=0, u(0)=α, u'(0)=0, denoted by u(·;α), has exactly k nodes on [0,∞). Moreover, when (and only when) α=α_k, the solution u(·;α_k) of (<ref>) verifies lim_r→∞ u(r;α_k)=0. In other words, finding the k-th radial state amounts to finding the corresponding α_k. We expect that k∼α_k^2 (see Figure <ref>). Observe that when α∈(α_k,α_k+1), then the solution of the differential equation such that u(0)=α has exactly k nodes (but does not converge to 0 at infinity). The so-called shooting method then consists in a simple application of the bisection principle to the search of α_k. The idea is the following. Start with an interval [α_*,α^*] such that u(·;α_*) and u(·;α^*) have respectively k-1 and k nodes on (0,∞). Define the middle of [α_*,α^*] by c_*=(α_*+α^*)/2. If the solution u(·;c_*) with initial data c_* has k-1 nodes on (0,∞), then reproduce the procedure on [c_*,α^*], otherwise iterate the procedure on [α_*,c_*]. The interval size is divided by two at each step and its bounds converge towards α_k. To compute the solution of (<ref>), we rewrite it as a first order system U'(r)= AU+F(r,U(r)), U(0)= [ α; 0 ] where U(r)=u(r)u'(r), A=[ 0 1; ω 0 ], F(r,U(r))= [ 0; d-1/ru'(r)+f(u(r)) ]. The solution of the initial value problem (<ref>) is then computed using the classical Runge-Kutta 4th order method. The only difficulty concerns the value to affect to F at r=0. The singularity can in fact be raised when u is sufficiently regular and the initial condition contains u'(0)=0. Indeed, assuming that u'∈𝒞^2 and writing the Taylor expansion at 0, we have u'(r)=u'(0)+u”(0)r+u”'(θ)r^2/2=u”(0)r+u”'(θ)r^2/2, θ∈(0,r). Assuming that u verifies (<ref>), we have -u”(r)-(d-1)u”(0)+u”'(θ)(d-1)r/2+ω u(r)-f(u(r))=0. Letting r tend to 0, we obtain -du”(0)+ω u(0)-f(u(0))=0. Therefore, we choose to set Fu(0)0=0(ω u(0)-f(u(0)))/d. The algorithm is given in Algorithm <ref>. Note that in Algorithm <ref>, it is understood that b has been chosen large enough so that the initial data of the excited state with the required number of nodes lies in [0,b]. 
In pratice, such a b can be obtained by inspection, taking larger and larger values until the solution of (<ref>) with initial data α=b has a sufficiently large number of nodes. Examples of solutions computed with the Shooting method are presented in Figure <ref>. § THE NEHARI METHOD In this section, we assume that f(u) = |u|^p-1u with p∈ (1,1+4/(d-2)_+). and we consider the problem -Δ u+u-|u|^p-1u=0 on Ω where Ω⊂ℝ^d is a domain or ^d itself. We recall that the action functional S is given by S(u) = 1/2∇ u_L^2(Ω)^2 + 1/2u_L^2(Ω)^2 -1/p+1u_L^p+1(Ω)^p+1, and the Nehari functional I(u) = ∇ u_L^2(Ω)^2 + u_L^2(Ω)^2 -u_L^p+1(Ω)^p+1. The Nehari manifold on Ω is then given by 𝒩 ={u ∈ H^1_0(Ω): I(u) = 0 }. To compute numerically nodal solutions when Ω=^d, an approach based on Theorem <ref> would provide only the least nodal excited state. As we would also like to compute higher order excited states, we will adopt a slightly different setting and we base our the approach on the theoretical result obtained by Bartsch and Willem <cit.>, which we describe now. Let Ω(ρ,σ)⊂^d be the annulus of ^d of radii ρ and σ: Ω(ρ,σ)={x∈^d:ρ≤ |x|<σ}. To obtain radial nodal excited states, we first work on each Ω(ρ,σ). For all 0≤ρ<σ≤∞ there exists a positive minimizer w_ρ,σ∈ H^1_0(Ω(ρ,σ)) of the action on the Nehari manifold on Ω(ρ,σ), i.e. S(w_ρ,σ)=m_ρ,σ:=min{S(u):u∈𝒩}, where it is understood that we have replaced Ω by Ω(ρ,σ) in the definitions of S, I and 𝒩. Nodal excited states are obtained by pasting together different w_ρ_j,σ_j and optimizing over (ρ_j,σ_j). More precisely, we have the following result from <cit.>. For every integer N_nodes≥ 0 there exists a radial solution u_N_nodes of (<ref>) having exactly N_nodes nodes, i.e. there exist 0=ρ_0<ρ_1<⋯<ρ_N_nodes<ρ_N_nodes+1=∞ such that u_k^-1(0)={x∈^d:|x|=ρ_j, for some j=1,…,N_nodes}. The set (ρ_j)_j=0,…,N_nodes+1 is the minimizer of min{∑_j=0^N_nodesm_ρ_j,ρ_j+1:0=ρ_0<ρ_1<⋯<ρ_N_nodes<ρ_N_nodes+1=∞}. The function u_N_nodes is constructed with the w_ρ_j,ρ_j+1 corresponding to m_ρ_j,ρ_j+1: u_N_nodes(x)=(-1)^jw_ρ_j,ρ_j+1(x), if ρ_j≤ |x|<ρ_j+1. Of course, if one wishes to compute numerically such a sequence, we must restrict ourselves to a bounded interval. That is, instead of considering the problem on the whole line ℝ, we restrict ourselves to the interval [0,R] for a given R>0 sufficiently large. We consider the space ℋ_rad,R^1 : = { u: [0,R]↦ℝ: ∫_0^R( |u'(r)|^2 + |u(r)|^2)r^d-1 dr <+∞, u(R) = 0}, on which we define the functionals S and I by the same formula, but restricted to the interval [0,R]. Similarly, we define the nodal Nehari space 𝒩_N_nodes,R : = {u ∈ℋ_rad,R^1: u^-1(0) = {ρ_1,ρ_2,…, ρ_N_nodes}, I(u_|[ρ_k,ρ_k+1]) = 0, 0≤ k ≤ N_nodes}, where 0 = ρ_0 < ρ_1<…<ρ_N_nodes+1 = R are depending on u. Let ℋ_N_nodes,R : = {u ∈ℋ_rad,R^1| u^-1(0) = {ρ_1,ρ_2,…, ρ_N_nodes}}. Then, we define the projection Π_𝒩_N_nodes,R: ℋ_N_nodes,R↦𝒩_N_nodes,R by Π_𝒩_N_nodes u : = ∑_k = 0^N_nodesu_|[ρ_k,ρ_k+1]( ∇ u_|[ρ_k,ρ_k+1]^2_L^2 + u_|[ρ_k,ρ_k+1]^2_L^2/u_|[ρ_k,ρ_k+1]_L^p+1^p+1)^1/(p-1). A natural method to compute the solution of min_u∈𝒩_N_nodes,RS(u), is to use the so-called projected gradient descent given by {[ u^(0)∈𝒩_N_nodes,R,; u^(n+1) = Π_𝒩_N_nodes,R(u^(n) + τ S'(u^(n))), ∀ n≥ 0, ]. where τ∈ℝ^+ is the time-step, which also writes as {[ u^(0)∈𝒩_N_nodes,R,; u^(n+1) = Π_𝒩_N_nodes,R(u^(n) - τ(Δ_rad,Ru^(n) -u^(n) + |u^(n)|^p-1u^(n))), ∀ n≥ 0. ]. 
By setting π_N([0,R]) : = {r_k := (k-1)h, 1≤ k≤ N+1} with h = R/N, we can consider a discretization of Δ_rad,R by finite differences acting on ℝ^N. That is, for any u∈ℋ_rad,R^1 we use the second order approximations, for any 2≤ k≤ N-1, u”(r_k) ≈u(r_k+1) - 2 u(r_k) + u(r_k-1)/h^2 u'(r_k) ≈u(r_k+1) - u(r_k-1)/2h, to deduce the following approximation Δ_rad,R u(r_k) ≈u(r_k+1) - 2 u(r_k) + u(r_k-1)/h^2 + d-1/r_ku(r_k+1) - u(r_k-1)/2h. Furthermore, the boundary conditions yield Δ_rad,R u (r_1) ≈2(u(r_2)-u(r_1))/h^2 Δ_rad,R u (r_N) ≈ - 2 u(r_N) + u(r_N-1)/h^2 - d-1/r_Nu(r_N-1)/2h. In the end, we obtain the matrix [Δ_rad,R]_i,j : = {[ 2/h^2, (i,j) = (1,2),; -2/h^2, 1≤ i≤ Nj = i,; 1/h^2 - (d-1)/2hr_i, 2≤ i≤ Nj = i - 1,; 1/h^2 + (d-1)/2hr_i, 2≤ i≤ N-1j = i + 1,; 0, . ]. By denoting u = (u(r_j))_1≤ j≤ N∈ℝ^N as the discretization of u on π_N([0,R]), we deduce that Δ_rad,R u(r_i) ≈ ([Δ_rad,R]u)_i, ∀ i∈{1,…,N}. We also need to discretize the positions of the nodes {ρ_1,…,ρ_N_nodes} and the projection on 𝒩_N_nodes,R. For any u∈ℋ_rad,R^1, each node of u is located in an interval [r_j,r_j+1], for a certain 1≤ j≤ N, where u(r_j+1)u(r_j)<0. By a linear approximation of u on each interval [r_j,r_j+1], with 1≤ j≤ N, an approximation of a node ρ belonging in [r_j,r_j+1] will be given by ρ≈ϱ = r_j u(r_j+1) - r_j+1 u(r_j)/u(r_j+1)-u(r_j). Concerning the projection on 𝒩_N_nodes,R, we need approximations of integrals and we choose to rely on a trapezoidal rule. This yields, for any v∈𝒞([0,R]), ∫_a^b v(r) r^d-1dr ≈v(b)b^d-1 + v(a)a^d-1/2 (b-a). We denote (ρ_k)_0≤ k≤ N_nodes+1 the nodes of u with ρ_0 = 0 and ρ_N_nodes+1 = R. For any u∈ℋ_rad,R^1 and 0≤ k≤ N_nodes, we deduce the approximation (using the trapezoidal rule) u_|[ρ_k,ρ_k+1]_L^p^p = ∫_ρ_k^ρ_k+1 |u(r)|^p r^d-1dr ≈ h ∑_j = m(ϱ_k)^ℓ(ϱ_k+1) |u(r_j)|^pr_j^d-1 + (r_m(ϱ_k) - ϱ_k )- h/2 |u(r_m(ϱ_k))|^pr_m(ϱ_k)^d-1 + (ϱ_k+1 - r_ℓ(ϱ_k+1))- h/2 |u(r_ℓ(ϱ_k+1))|^pr_ℓ(ϱ_k+1)^d-1 : = 𝔏^p_k(u), where m(ρ) = min{j∈{1,…,N}: r_j ≥ρ}, ℓ(ρ) = max{j∈{1,…,N}: r_j≤ρ} and ϱ_k is the approximation of ρ_k obtained by (<ref>). Furthermore, we have, by using (<ref>), for any u∈ℋ_rad,R^1 and 1≤ k≤ N_nodes-1, ∇ u_|[ρ_k,ρ_k+1]_L^2^2 ≈ h ∑_j = m(ϱ_k)^ℓ(ϱ_k+1)|u(r_j+1) - u(r_j-1)/2h|^2r_j^d-1 + (r_m(ϱ_k) - ϱ_k )- h/2|u(r_m(ϱ_k)+1)-u(r_m(ϱ_k)-1)/2h|^2r_m(ϱ_k)^d-1 + r_m(ϱ_k) - ϱ_k /2|u(r_m(ϱ_k))-u(r_m(ϱ_k)-1)/h|^2ϱ_k^d-1 + (ϱ_k+1 - r_ℓ(ϱ_k+1))- h/2|u(r_ℓ(ϱ_k+1)+1)-u(r_ℓ(ϱ_k+1)-1)/2h|^2r_ℓ(ϱ_k+1)^d-1 + ϱ_k+1 - r_ℓ(ϱ_k+1)/2|u(r_ℓ(ϱ_k+1)+1)-u(r_ℓ(ϱ_k+1))/h|^2ϱ_k+1^d-1 : = 𝔑_k(u), where we used the following finite differences approximation u'(ρ) ≈u(r_j)- u(r_j-1)/r_j - r_j-1, with j∈{1,…,N} such that ρ is a nod belonging in (r_j-1,r_j). We notice that, in the case k = N_nodes, the previous expression is replaced with ∇ u_|[ρ_N_nodes,R]_L^2^2 ≈ h ∑_j = m(ϱ_N_nodes)^N-1|u(r_j+1) - u(r_j-1)/2h|^2r_j^d-1 + (r_m(ϱ_N_nodes) - ϱ_N_nodes )- h/2|u(r_m(ϱ_N_nodes)+1)-u(r_m(ϱ_N_nodes)-1)/2h|^2r_m(ϱ_N_nodes)^d-1 + r_m(ϱ_N_nodes) - ϱ_N_nodes/2|u(r_m(ϱ_N_nodes))-u(r_m(ϱ_N_nodes)-1)/h|^2ϱ_N_nodes^d-1 + h/2|u(r_N-1)/2h|^2r^d-1_N : = 𝔑_N_nodes(u). For the case k = 0, we use instead ∇ u_|[0,ρ_1]_L^2^2 ≈ h ∑_j = 2^ℓ(ϱ_1)|u(r_j+1) - u(r_j-1)/2h|^2r_j^d-1 + (ϱ_1 - r_ℓ(ϱ_1))- h/2|u(r_ℓ(ϱ_1)+1)-u(r_ℓ(ϱ_1)-1)/2h|^2r_ℓ(ϱ_1)^d-1 + ϱ_1 - r_ℓ(ϱ_1)/2|u(r_ℓ(ϱ_1)+1)-u(r_ℓ(ϱ_1))/h|^2ϱ_1^d-1 : = 𝔑_0(u). Thanks to (<ref>) and (<ref>)-(<ref>)-(<ref>), we can deduce an approximation of the projection Π_𝒩_N_nodes. 
By denoting 𝔈_k(u) = 1/2𝔑_k(u) + 1/2𝔏^2_k(u), we obtain Π_𝒩_N_nodes u ≈𝔓_N_nodesu : = ∑_k = 0^N_nodesu_|{m(ϱ_k), ℓ(ϱ_k+1)}(𝔈_k(u)/𝔏^p+1_k(u))^1/p-1, where, for 1≤ j≤ N, (u_|{m(ρ_k), ℓ(ρ_k+1)})_j = {[ u_j, m(ρ_k)≤ j≤ℓ(ρ_k+1),; 0, . ]. We can now gives a completely discretized version of the projected gradient descent method which is described in Algorithm <ref> (where [v]_i,j = v_j if i = j and 0 if i≠ j). § SOME PROPERTIES OF THE NEHARI AND SHOOTING METHODS In this section, we discuss some properties of the methods that we have introduced. Our goal is to point out some of their strengths and weaknesses. In the case of the shooting method, observe that there is an inherent numerical difficulty associated with its practical implementation. Indeed, given k∈ℕ, the value of α_k can be determined only up to machine precision, i.e. 10^-16 in practice. This is limiting the size of the domain in x on which u(·;α_k) can be computed accurately, even assuming no error on the numerical resolution of the Cauchy problem (<ref>). Indeed, let >0 and define w_=u(·;α_k)-u(·;α_k+). Then w_ verifies -w_”+w_-f'(u(·;α_k))w_=O(w_^2). As lim_r→∞f'(u(r;α_k))=0, the linear part of the equation is given by -w_”+w_. Whenever w_ become small enough so that O(w_^2) becomes negligible, the dynamics of the equation of w_ becomes driven by the linear part, for which 0 is an exponentially unstable solution. As a consequence, we may have w_(x)∼ e^ x, which leads to w_∼ 1 after x∼ -ln() (after which nonlinear effects cannot be neglected any more). For = 10^-16, the best we can hope (assuming that the numerical method used to solve the ordinary differential equation is perfectly accurate) is therefore to solve our equation on an interval of length -ln()∼ 36. This is illustrated in Figure <ref>, on which we calculate the ground state with the shooting and Nehari methods in the case of the dimension d = 2 and for p = 3 when R = 100. We observe on the log-graph that at a distance from the origine around 19, the calculated solution starts to increase and goes far away from the expected solution (which is exponentially decreasing toward 0 at infinity). This issue is not observe in the case of the Nehari method due to the fact that we implement a Dirichlet boundary condition directly in the operator. This ensures that the numerical solution decreases properly to zero at the end of the domain. We now turn to the number of iterations required to compute a bound state. In the case of the shooting method, this number is naturally bounded by the maximal number of iterations used in the dichotomy. With a maximal precision set to be ε = 10^-16 and an initial interval of length 100 for the initial data, the number of iterations will always be lower than 18log_2(10) ≈ 60. The Nehari method does not benefit from such bound on its number of iterations. In Figure <ref>, we depict the number of iterations necessary to the computation of bound states with respect to their number of nodes for d = 2, p = 3, R = 30 and N = 2^12. In each case, we use the following initial data u_0(r) = cos(r)e^-r^2/30. We remark that this initial data has a large number of nodes and decreases rapidly to zero. By construction, the algorithm selects the desired number of nodes and the excess nodes are discarded. We can see that the number of iterations grows rapidly, making the Nehari method numerically costly compared to the shooting method. Finally, we investigate the convergence properties of these methods with respect to the number of discretization points. 
To do so, we compute bounds states in dimension d=2 and for p=3 on the interval [0,R], for R = 30, for different numbers of nodes with each method. The number of discretization points is set to be 2^N with N∈{8,9,10,11,12} and we compute the errors e_N^(1) : = u^(N) - u^(ref)_L^1 = h ∑_k = 1^2^N |u^(N)_k - u^(ref)_k| e_N^(∞) : = u^(N) - u^(ref)_L^∞ = sup_1≤ k≤ 2^N |u^(N)_k - u^(ref)_k|, for each N, where u^(ref) is the bound state computed with 2^15 discretization points. The results are depicted in Figure <ref> for the Nehari method and Figure <ref> for the shooting method. We can see that the order of convergence of the Nehari method depends on the number of nodes and does not seems to be a specific value. However, we can affirm that it is of order above 1. In the case of the shooting method, the conclusion is more straightforward since the order of convergence is clearly 1 regardless of the number of nodes. This is explained by the fact that the positions of the nodes are computed with an error of the order of the space discretization. We then perform a comparison between the bound state obtained by each method in the same configuration. That is, we compute the errors E_N^(1) : = u^(N,Nehari) - u^(N,Shooting)_L^1 = h ∑_k = 1^2^N |u^(N,Nehari)_k - u^(N,Shooting)_k| E_N^(∞) : = u^(N,Nehari) - u^(N,Shooting)_L^∞ = sup_1≤ k≤ 2^N |u^(N,Nehari)_k - u^(N,Shooting)_k|, for each N∈{8,9,10,11,12}, where u^(N,Nehari) (resp. u^(N,Shooting)) is the bound state obtained by the Nehari method (resp. the shooting method). The results can be observed in Figure <ref> where we can see that, no matter the number of nodes, both methods converge to the same bound state. In conclusion, we have studied two numerical methods to compute the bound states of the nonlinear Schrödinger equation in the radial case. The shooting method offers the advantage of being fast but the disadvantage of being less robust, whereas the Nehari method is robust but slow. Based on this observation, this suggest, for numerical experiments, to combine these two methods, that is, to make an initial approximation of the bound state using the shooting method and then refine it using the Nehari method to obtain the desired decay towards zero at infinity. § NUMERICAL EXPERIMENTS In this section, we present some results obtained by numerical experiments consisting in running first the shooting method and then take the outcome as initial data for the Nehari method. In Figure <ref>, we consider the case d=2 and p=3 and depict the relation between the number of nodes k of the bound state u_k and its initial value u_k(0). We fit the data points with a function k↦ a + b√(k) where a = 0.4841 (with 95 percent confidence bounds [0.4487,0.5194]) and b = 2.415 (with 95 percent confidence bounds [2.409,2.422]). We also studied the positions of the nodes depending on the bound state (that is depending on its total number of nodes). These positions are depicted in Figure <ref>. For each node, the position seems to follow a certain behavior that can be modeled by the function k↦ 1/√(ak+b). We illustrate the value of the coefficients a and b for each node in Figure <ref>. In Figure <ref>, we plot the positions and (absolute) values of the extrema of the bound states between two consecutive nodes. We observe that for large numbers of nodes in the bound state, the extrema tend to a constant value which is √(2). 
This can be explained by the fact that for large r the first derivative term in (<ref>) vanishes and the solution is close to a soliton of the one-dimensional setting, whose expression is known to be r↦√(2) sech(r). This is illustrated in Figure <ref>, where we superimpose this soliton (adequately shifted) with the (absolute) value of the bound state between its last two nodes (with a total number of nodes equal to 60). § QUANTITATIVE DEFORMATION LEMMA In this appendix, we recall the quantitative deformation lemma used in the proof of Theorem <ref>. For X a Banach space and S⊂ X, introduce the notation S_δ:={u∈ X | dist(u,S)≤δ}, and for φ:X→ℝ, c∈ℝ, define φ^c=φ^-1((-∞,c]). Let X be a Banach space, φ∈𝒞^1(X,ℝ), S⊂ X, c∈ℝ, ε,δ>0 such that φ'(u)_X≥8ε/δ for all u∈φ^-1([c-2ε,c+2ε])∩ S_2δ. Then there exists η∈𝒞([0,1]× X,X) such that (i) η(t,u)=u if t=0 or if u∉φ^-1([c-2ε,c+2ε])∩ S_2δ, (ii) η(1,φ^c+ε∩ S)⊂φ^c-ε, (iii) η(t,·) is a homeomorphism of X for all t∈[0,1], (iv) η(t,u)-u_X≤δ for all u∈ X and for all t∈[0,1], (v) φ(η(·,u)) is non-increasing for all u∈ X, (vi) φ(η(t,u))<c for all u∈φ^c∩ S_δ and for all t∈(0,1].
http://arxiv.org/abs/2307.00287v1
20230701095721
Null controllability of a kind of n-dimensional degenerate parabolic equation
[ "Hongli Sun", "Yuanhang Liu", "Weijia Wu", "Donghui Yang" ]
math.AP
[ "math.AP", "math.OC" ]
*Corresponding author: Weijia Wu. 2020 Mathematics Subject Classification: 93B05. Null controllability of a kind of n-dimensional degenerate parabolic equation Hongli Sun^1, Yuanhang Liu^2, Weijia Wu^2,*, Donghui Yang^2 ^1 School of Mathematics, Physics and Big Data, Chongqing University of Science and Technology, Chongqing 401331, China ^2 School of Mathematics and Statistics, Central South University, Changsha 410083, China
====================================================================================================
In this paper, we investigate a class of n-dimensional degenerate parabolic equations with abstract coefficients. Our focus is on improving the regularity of solutions and establishing Carleman estimates for these equations through the construction of specialized weight functions. Using these results, we demonstrate the null controllability of the corresponding equations. Additionally, we provide a specific example to illustrate the efficacy of our methodology.
§ INTRODUCTION Controllability is a fundamental concept in control theory that was first introduced by the renowned mathematician Kalman. It holds great importance in solving control problems within linear systems. The study of controllability for parabolic equations has a rich history spanning half a century (see <cit.>), and can be categorized into two main branches: the controllability of non-degenerate parabolic equations and the controllability of degenerate parabolic equations. While there has been significant progress in analyzing the controllability of non-degenerate parabolic equations across various fields, research on the controllability of degenerate parabolic equations remains relatively limited. The degenerate parabolic equation, which is a common class of diffusion equations, can describe numerous physical phenomena. For example, the famous Crocco equation is a degenerate parabolic equation, which reflects the compatibility relationship between the change in total energy and entropy in steady flow and vorticity. Tornadoes follow the Crocco equation during the rotation process, and thus the study of controllability and optimal control problems of the Crocco equation is of great significance in meteorology (see <cit.>). Similarly, the Black-Scholes equation (see <cit.>), which is widely studied in finance, and the Kolmogorov equation (see <cit.>), are also degenerate parabolic equations with important practical applications in real life. Therefore, the study of the control problems of degenerate parabolic equations is of great significance. In <cit.>, the authors introduced the concepts of regional null controllability and regional persistent null controllability, and proved the regional controllability for a Crocco-type linearized equation and for the nondegenerate heat equation in unbounded domains. Furthermore, in subsequent studies <cit.>, the controllability of one-dimensional degenerate heat equations and other one-dimensional degenerate parabolic equations was demonstrated.
After this, scholars have also investigated the problem of the controllability of high-dimensional degenerate equations, and the controllability of the Grushin-type operator has been extensively studied (see <cit.>). In <cit.>, The authors obtained the Carleman estimate of the two-dimensional Grushin-type operator by using the Fourier decomposition, and further obtained the null controllability. Besides, there are many new results on the controllability of other high-dimensional degenerate equations. In <cit.>, they presented an effective approach to establish the controllability of two-dimensional degenerate equations and to address optimal control problems. Importantly, in the context of two-dimensional equations, the validity of the divergence theorem becomes crucial for meaningful boundary integrals along the degenerate boundary of the control region. Consequently, the inclusion of two-dimensional weighted Hardy inequalities and trace operators on weighted spaces becomes necessary to handle degenerate boundaries. Inspired by the work of <cit.>, this paper focuses on a more general class of n-dimensional degenerate parabolic equations. By employing the Carleman estimate technique, we establish the null controllability of these equations, where the degenerate coefficient is considered as an abstract function, and the control region is located near the boundary. It is worth noting that our model exhibits significant differences from that of <cit.>. In <cit.>, they need to use Hardy inequality and trace theory to obtain the well-posed results and Carleman estimates, but in this paper, due to different assumptions, different control regions and different weight functions, we do not need Hardy inequality and trace theory, but use a new method to obtain our Carleman's estimate by “cutting off " the degenerate boundary. In other words, if we cut off the degenerate boundary, null controllability always holds. However, this is only one of the methods, and interior control is a further problem for us to consider. The remaining sections of this paper are structured as follows. In section 2, we present some preliminary results and demonstrate the well-posedness of problem (<ref>). In section 3, we provide Carleman estimates for the degenerate equation (<ref>) and present the main results of this paper. In order to enhance the understanding of our model, section 4 provides a specific example and discusses how to improve the internal regularity and obtain Carleman estimates for the corresponding equation (<ref>) to be formulated later. § MAIN RESULTS This section presents the main results of the paper, which are divided into two parts: Results in abstract form and Results in a specific example. §.§ Results in abstract form In this paper, we consider the following problem in a bounded domain Ω⊂ℝ^n with Lipschitz boundary: ∂_tz - ∑_i,j=1^n∂_x_i(A_ij∂_x_jz) + bz=χ_ωg, Q, z=0 A∇ z ·ν =0, Σ, z(0)= z_0, Ω. Here, Q:=Ω× (0,T) and Σ:= Γ×(0,T) denote the space-time domain and boundary, respectively. The set Ω̂⊂Ω is a nonempty open subset, and Ω_0 ⊂⊂Ω̂ is a nonempty open set with a smooth boundary. The set ω:=Ω\Ω_0 is defined as the complement of the closure of Ω_0, and χ_ω is the corresponding characteristic function. The control function g ∈ L^2(Q) and the initial data z_0 ∈ L^2(Ω) are given, while b∈ L^∞(Q) is a known function. 
We impose the following assumption on the matrix-valued function A=(A_ij(x))_i,j=1^n: The matrix-valued function A=(A_ij(x))_i,j=1^n is positive definite for all x ∈, but may vanish at a subset of ∂Ω, and A_ij=A_ji∈ C^2(), i,j=1,⋯, n, ρ|ξ|^2 ≤∑_i,j=1^n A_ij(x)ξ_iξ_j ≤Λ |ξ|^2, ∀ξ∈ℝ^n, ∀ x ∈, here 0<ρ≤Λ are two fixed constants. From Assumption <ref>, it is known that the equation (<ref>) is degenerate on a part of the boundary, but non-degenerate in the interior. We define the function spaces ℋ^1(Ω) and ℋ^2(Ω) as follows: ℋ^1() ={ z∈ L^2() |∇ z A ∇ z ∈ L^1() }, ℋ^2() ={ z∈ℋ^1() |(A∇ z) ∈ L^2() }. These spaces are equipped with the following scalar products: (z,v)_ℋ^1() :=∫_Ω zv dx + ∫_Ω∇ z A ∇ v dx, (z,v)_ℋ^2() :=∫_Ω zv dx + ∫_Ω∇ z A ∇ v dx + ∫_Ω(A∇ z)(A∇ v) dx. These spaces are endowed with the following norms: z _ℋ^1() = z _L^2() + ∇ z A ∇ z_L^1(), z _ℋ^2() = z _ℋ^1() + (A∇ z)_L^2(). It can be easily verified that (ℋ^1(Ω), (·,·)ℋ^1(Ω)) and (ℋ^2(Ω), (·,·)ℋ^2(Ω)) are inner product spaces, and (ℋ^1(Ω), |·|ℋ^1(Ω)) and (ℋ^2(Ω), |·|ℋ^2(Ω)) are Banach spaces. Let ℋ_0^1(Ω) denote the closure of C_0^∞(Ω) in the space ℋ^1(Ω). In other words, _0^1(Ω)=C_0^∞(Ω)^^1(Ω). We also introduce the operators (𝒜_1,D(𝒜_1)) and (𝒜_2,D(𝒜_2)) defined as follows: 𝒜_1 z= ∑_i,j=1^n∂_x_i(A_ij∂_x_jz), D(𝒜_1) = ℋ^2 ()∩_0^1(Ω) , 𝒜_2 z= ∑_i,j=1^n∂_x_i(A_ij∂_x_jz), D(𝒜_2) = { z∈ℋ^2 () | A∇ z ·ν = 0 } . Next, we will present some results related to the bilinear form q(·,·) associated with the operators 𝒜_1 and 𝒜_2. But before that, we introduce an assumption. When A∇ z ·ν =0, that is to say, in Newman's boundary condition, we assume that A∇ z ∈ (W^1,1())^n. It should be noted that this assumption is used to prove Lemma <ref>, which will be presented later. However, Assumption <ref> is not strictly necessary. Following a similar approach to the case in <cit.>, we can assume that A∇ z ∈ H^-1/2(Γ) and ν∈ H^1/2(Γ). For any g ∈ L^2(Q) and any z_0 ∈ L^2(Ω), there exists a unique solution z ∈ C^0([0,T];L^2(Ω)) ∩ L^2(0,T;ℋ_0^1(Ω)) to equation (<ref>). Moreover, there exists a positive constant C such that sup_t∈[0,T] z(t)_L^2(Q)^2 + ∫_0^T z(t)^2__0^1(Ω) d t ≤ C( z_0_L^2()^2 + g_L^2(Q)^2). For any g ∈ L^2(Q) and any z_0 ∈ L^2(Ω), there exists a unique solution z ∈ C^0([0,T];L^2(Ω)) ∩ L^2(0,T;ℋ^1(Ω)) to equation (<ref>) with homogeneous Neumann boundary conditions. Furthermore, there exists a positive constant C such that sup_t∈[0,T] z(t)_L^2(Q)^2 + ∫_0^T z(t)^2_^1(Ω) d t ≤ C( z_0_L^2()^2 + g_L^2(Q)^2). It is worth noting that by adding a gradient term c∇ z to the left-hand side of equation (<ref>) and imposing stronger conditions on the degenerate coefficient A_ij(x), similar well-posedness results can be obtained. Let us consider the adjoint equation corresponding to (<ref>): ∂_tw + (A∇ w) -bw=f, Q, w=0 A∇ w ·ν =0, Σ, w(x, y, T)= w_T, Ω. The main results of this paper are the Carleman inequality and observability inequality stated below. There exist positive constants C, s_0, λ_0 such that for any λ≥λ_0, s ≥ s_0, and any solution u to (<ref>), the following inequality holds: s^-1∬_Qξ^-1 (|u_t |^2 + |(A∇ u)|^2) dx dt +C s^2 λ^2 ∫_0^T ∫_\ωξ^2|u|^2dx d t +C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx d t ≤ Ce^-s σ f^2 +Cs^3 λ^3 ∫_0^T ∫_ωξ^3|u|^2dx d t. By a standard argument, we can conclude the following theorem (see <cit.>). 
For a fixed T>0 and an open set ω⊂Ω as defined previously, assuming that (<ref>) holds, there exists a positive constant C>0 such that for any w_T ∈ L^2(Ω), the solution to (<ref>) satisfies ∫_Ω|w(x, 0)|^2 d x ≤ C ∬_ω×(0, T)|w|^2 d x d t. Using the duality between controllability and observability, proving controllability is equivalent to establishing an observability property for the adjoint system (<ref>) (see <cit.>). From this, we can deduce the null controllability of (<ref>). For a fixed T>0 and an open set ω⊂Ω as defined previously, assuming that (<ref>) holds, there exists a control g ∈ L^2(Q) such that the solution z of (<ref>) satisfies z(·, T)=0, in Ω. Moreover, there exists a constant C=C(T, ω)>0 such that g_L^2()≤ C|z_0| . §.§ Results in a specific example To illustrate the results of this paper, we consider a specific example of a two-dimensional degenerate parabolic equation: ∂_tz - (y^α_y∂_xxz +x^α_x∂_yyz )=χ_ωg, Q, z(x,y,t)=0 A∇ z ·ν =0, Σ, z(x, y, 0)= z_0(x, y), Ω, where α_x, α_y ∈ (0,2), Ω=(0,1)×(0,1), Γ:= ∂Ω, T>0, Q:=Ω× (0,T), Σ:= Γ×(0,T), ω⊂Ω is a nonempty open set, and χ_ω is the corresponding characteristic function. The control g ∈ L^2(Q) and z_0 ∈ L^2(Ω) is the initial data. The function space, inner product, and norm are defined as in equations (<ref>)-(<ref>), and the matrix-valued function A:Ω→ M_2× 2(ℝ) is given by [ y^α_y 0; 0 x^α_x ]. Let ℬ_1 and ℬ2 be operators defined as follows: ℬ_1 z= y^α_y∂_xxz +x^α_x∂_yyz, D(ℬ_1) = ℋ^2 ()∩_0^1(Ω) , ℬ_2 z= y^α_y∂_xxz +x^α_x∂_yyz, D(ℬ_2) = { z∈ℋ^2 () | B∇ z ·ν = 0 } . It can be easily verified that (ℬ_1,D(ℬ_1)) and (ℬ_2,D(ℬ_2)) are self-adjoint and dissipative operators with dense domain. Consequently, (ℬ_1,D(ℬ_1)) and (ℬ_2,D(ℬ_2)) serve as the infinitesimal generators of the strongly continuous semigroups e^tℬ_1 and e^tℬ_2, respectively. As a result, similar well-posedness results can be established as in Theorem <ref> and Theorem <ref>. It is worth noting that in this example, we can also apply the Galerkin method described in <cit.> to establish the existence of solutions in weakly degenerate cases. This is possible due to the compact embedding result derived in <cit.>. Once the existence of solutions is established, we can proceed to investigate the controllability of the equation (<ref>). The adjoint equation corresponding to (<ref>) is given by ∂_tw + (A∇ w)=f, Q, w(x,y,t)=0, A∇ w ·ν =0, Σ, w(x, y, T)= w_T, Ω. By employing Carleman estimates, we can derive similar results on null controlability as presented in Theorems <ref>, <ref>, and <ref>. § WELL-POSED RESULTS In this section, we investigate the well-posedness of (<ref>). By utilizing Assumption <ref>, we establish the following Green's formula. For (u,v)∈ D(𝒜1)×^1_0(Ω) or (u,v)∈ D(𝒜2)×^1(Ω), the following equality holds: ∫_∇ u · A ∇ v dx = -∫_ (A ∇ u) vdx. The proof follows directly from Assumption <ref> and the boundary conditions. The following results hold: (1) The injection i_1: D(𝒜_1)→ L^2() and i_2: D(𝒜2)→ L^2() are continuous with dense range. (2) The bilinear form q_1(u,v):= ∫_∇ u · A ∇ v dx, (u,v)∈ D(𝒜_1)×^1_0(Ω) and q_2(u,v):= ∫_∇ u · A ∇ v dx, (u,v)∈ D(𝒜_2)×^1(Ω), are continuous, positive, symmetric. (3) The operators (𝒜_1,D(𝒜_1)) and (𝒜_2,D(𝒜_2)) are self-adjoint and dissipative. Clearly, the continuity of i_1 and i_2 follows directly from the definition of D(𝒜_1) and D(𝒜_2). Consequently, since C_0^∞ () is dense in D(𝒜_1), it is also dense in L^2(). Moreover, since C_0^∞ ()⊂ℋ^2()⊂ L^2(), we have ℋ^2() is dense in L^2(). This establishes (1). 
Next, we prove (2). It is evident that q_1 and q_2 are symmetric, and q_1(u,u) ≥ 0 and q_2(u,u) ≥ 0 for all u. Furthermore, |q_1(u,v)|≤ u __0^1(Ω) v __0^1(Ω), |q_2(u,v)|≤ u _^1(Ω) v _^1(Ω). Hence, q_1 and q_2 are continuous. Now, let (u_1,v_1) ∈ D(𝒜_1)×^1_0(Ω) and (u_2,v_2)∈ D(𝒜_2)×^1(Ω). By applying Lemma <ref>, we have q_1(u_1,v_1)=-∫_𝒜_1 u_1 · v_1 d x, q_2(u_2,v_2)=-∫_𝒜_2 u_2 · v_2 d x. Furthermore, since (γ I - 𝒜_1)u_1 _L^2() =sup_v_1_L^2()≤ 1, v_1 ∈_0^1(Ω)∫_Ωγ v_1 u_1 - v_1 𝒜_1 u_1 dx ≥γ u_1 _L^2(), (γ I - 𝒜_2)u_2 _L^2() =sup_v_2_L^2()≤ 1, v_2 ∈^1(Ω)∫_Ωγ v_2 u_2 - v_2 𝒜_2 u_2 dx ≥γ u_2 _L^2(), and a similar inequality holds for (γ I - 𝒜_2)u_2, we conclude that 𝒜_1 and 𝒜_2 are self-adjoint and dissipative, as desired. Consequently, both 𝒜_1 and 𝒜_2 serve as the infinitesimal generators of the strongly continuous semigroups denoted by e^t𝒜_1 and e^t𝒜_2, respectively. Additionally, the family of operators in ℒ(L^2()) given by B(t)u := b(t,·)u, t∈ (0,T), u∈ L^2() can be regarded as a family of bounded perturbations of 𝒜_1 (resp. 𝒜_2). Consequently, utilizing standard techniques (see <cit.>), one can establish the validity of Theorem <ref> and Theorem <ref>. We have now demonstrated the existence of solutions to equation (<ref>), with the solution z ∈ C([0,T];L^2(Ω))∩ L^2(0,T;ℋ_0^1(Ω)). Our aim is to establish that the regularity of the solutions can be enhanced within the interior of Ω. By employing standard arguments found in <cit.>, we can establish the following result. For all solutions of equation (<ref>), it holds that A∇ u ∈ W^1,1(_0) and ∇ u A∇ u ∈ W^1,1(_0). § CARLEMAN ESITIMATES Let us now derive a Carleman estimate. Consider η∈ C_0^3(Ω) satisfying η(x) := =0, x ∈Ω\, >0, x ∈, and ∇η_L^2()≥ C >0 _0, ∇η=0 ∂. Define θ(t):=[t(T-t)]^-4, ξ(x, t):=θ(t) e^λ(8|η|_∞+η (x)), σ(x, t):=θ(t) e^10 λ|η|_∞-ξ(x, t). In what follows, C>0 represents a generic constant, and w denotes a solution of equation (<ref>). We can assume, using standard arguments, that w possesses sufficient regularity. Specifically, we consider w ∈ H^1(0,T; ℋ_0^1(Ω)) with homogeneous Dirichlet boundary conditions and w ∈ H^1(0,T; ℋ^1(Ω)) with homogeneous Neumann boundary conditions. Consider s>s_0>0 and introduce u=e^-sσ w. Then, the following properties hold for u: (i) u=∂ u/∂ x_i=0 at t=0 and t=T; (ii) u=0 or A∇ u ·ν =0 on Σ; (iii) If P_1 u:=u_t+s (u A ∇σ)+s ∇σ A ∇ u and P_2 u:=(A ∇ u)+s^2 u ∇σ A ∇σ+s σ_t u, then P_1 u+P_2 u=e^-s σ f. From item (iii), it follows that P_1 u^2+P_2 u^2+2(P_1 u, P_2 u)=e^-s σ f^2. Let us define (P_1 u, P_2 u)=I_1+⋯+I_4, where I_1:=((A ∇ u)+s^2 u ∇σ· A ∇σ+s σ_t u, u_t), I_2:=s^2(σ_t u, (u A ∇σ)+∇σ A ∇ u), I_3:=s^3( u ∇σ· A ∇σ, (u A ∇σ)+∇σ· A ∇ u), I_4:=s((A ∇ u), (u A ∇σ)+∇σ· A ∇ u). By differentiating with respect to the time variable t, we can improve the regularity of u, leading to u_t ∈ℋ_0^1(Ω). From Theorem <ref> and item (i), we have I_1= ∬_Q u_t (A∇ u) +s^2 u ∇σ· A∇σ u_t + sσ_t u u_t dx dt = ∫_ s^2 ∇σ A∇σ·1/2u^2 dx |_0^T + ∫_ sσ_t·1/2u^2 dx |_0^T+ ∬_Σ u_t A∇ u ·ν ds dt -∬_Q A∇ u ·∇ u_t dx dt -1/2∬_Q(sσ_t + s^2 ∇σ· A∇σ)_t u^2 dx dt = - 1/2∫_ A∇ u ·∇ u dx |_0^T -1/2∬_Q(sσ_t + s^2 ∇σ· A∇σ)_t u^2 dx dt = -1/2∬_Q(sσ_t + s^2 ∇σ· A∇σ)_t u^2 dx dt. Indeed, we have ∬_Σ u_t A∇ u ·ν ds dt=0 for the Dirichlet boundary condition since u_t ∈ℋ_0^1(Ω), and ∬_Σ u_t A∇ u ·ν ds dt=0 for the Neumann boundary condition since A∇ u·ν=0. 
Furthermore, we obtain I_2 = s^2 ∬_Q σ_t u ((u A ∇σ) + A∇ u ·∇σ) dx dt = s^2 ∬_Q σ_t u (( A ∇σ) + 2 A∇ u ·∇σ) dx dt = s^2 ∬_Q σ_t u ( A ∇σ) + ( A ∇σ) σ_t u^2 dx dt = s^2 ∬_Σσ_t u^2 A ∇σ·ν dsdt - s^2 ∬_Q (σ_t A ∇σ)u^2 dx dt + s^2 ∬_Q ( A ∇σ) σ_t u^2 dx dt = - s^2 ∬_Q ( A ∇σ) σ_t u^2 dx dt -s^2∬_Q A ∇σ·∇σ_t u^2 dx dt + s^2∬_Q ( A ∇σ) σ_t u^2 dx dt = -s^2∬_Q A ∇σ·∇σ_t u^2 dx dt. In the fourth equality, we can observe that s^2 ∬_Σσ_t u^2 A ∇σ·ν dsdt=0 for the Dirichlet boundary condition, as u ∈ℋ_0^1(Ω), and s^2 ∬_Σσ_t u^2 A ∇σ·ν dsdt=0 for the Neumann boundary condition, as ∇η=0 on ∂Ω. Similarly, in I_3 below, s^3 ∬_Σ u^2 (A ∇σ·∇σ) A ∇σ·ν ds dt=0 holds for the Dirichlet boundary condition due to u ∈ℋ_0^1(Ω), and s^3 ∬_Σ u^2 (A ∇σ·∇σ) A ∇σ·ν ds dt=0 holds for the Neumann boundary condition as ∇η=0 on ∂Ω. In Equation (<ref>), we have I_3 = s^3 ∬_Q u A ∇σ·∇σ ( (u A ∇σ) + A∇ u ·∇σ ) dx dt = s^3 ∬_Σ u^2 (A ∇σ·∇σ) A ∇σ·ν ds dt -s^3 ∬_Q u A ∇σ·∇ (u A ∇σ·∇σ ) dx dt + s^3 ∬_Q (A ∇σ·∇ u) ( A ∇σ·∇σ ) u dx dt = -s^3∬_Q A∇σ·∇ ( A∇σ·∇σ) u^2 dx dt. Similarly, for I_4, we have I_4= s ∬_Q (A ∇ u)( (u A ∇σ)+ A ∇ u ·∇σ) dx dt = s ∬_Q (A ∇ u)( A ∇ u ·∇σ+ u (A ∇σ)+ A ∇ u ·∇σ) dx dt = s ∬_Q (A ∇ u) u (A ∇σ)dx dt + 2s ∬_Q (A ∇ u) A ∇ u ·∇σ dx dt = s ∬_Σ u (A ∇σ )A ∇ u ·ν dsdt + 2s ∬_Σ (A ∇ u ·∇σ) A ∇ u ·ν ds dt -s ∬_Q A ∇ u ·∇( u (A ∇σ)) dx dt -2s ∬_Q A ∇ u·∇( A ∇ u ·∇σ) dx dt = -s ∬_Q A ∇ u ·∇ u (A ∇σ)dx dt-s ∬_Q u A ∇ u·∇ ((A ∇σ)) dx dt -2s ∬_Q A ∇ u·∇(A ∇ u ·∇σ) dx dt, here, in the forth equality, s ∬_Σ u (A ∇σ )A ∇ u ·ν dsdt=0 can be found in the Dirichlet boundary condition since u ∈ℋ_0^1() and s ∬_Σ u (A ∇σ )A ∇ u ·ν dsdt=0 in the Neumann boundary condition since ∇η=0 on ∂. Since -2s ∬_Q A ∇ u ·∇ (A∇ u ·∇σ) dx dt = -2s ∑_i=1^2∬_Q (A∇σ)_i A ∇ u ·∂/∂ x_i(∇ u) + (∇ u)_i A ∇ u ·∇ (A ∇σ)_i dx dt, where -2s ∑_i=1^2∬_Q (A∇σ)_i A ∇ u ·∂/∂ x_i(∇ u) dx dt = -s ∑_i=1^2∬_Q (A∇σ)_i [ ∂/∂ x_i (A ∇ u ·∇ u) - ∂ A/∂ x_i∇ u ·∇ u ] dx dt = -s ∬_Σ (A ∇ u ·∇ u)(A∇σ) ·ν dsdt + s∬_Q (A ∇ u ·∇ u ) (A∇σ) dx dt +s ∑_i=1^2∬_Q (A∇σ)_i ∂ A/∂ x_i∇ u ·∇ u dx dt = s ∬_Q A∇ u ·∇ u (A∇σ) dx dt + s ∑_i=1^2∬_Q (A∇σ)_i ∂ A/∂ x_i∇ u ·∇ u dx dt, here -s ∬_Σ (A ∇ u ·∇ u)(A∇σ) ·ν dsdt=0 on ∂ since ∇η=0 on ∂. Hence, we have I_4 = -s ∬_Q u A ∇ u·∇ ((A ∇σ))dx dt -2s ∑_i=1^2∬_Q (∇ u)_i A ∇ u ·∇ (A ∇σ)_i dx dt + s ∑_i=1^2∬_Q (A∇σ)_i ∂ A/∂ x_i∇ u ·∇ u dx dt. By combining (<ref>) to (<ref>), we can conclude that (P_1 u, P_2 u) = -s^3 ∬_Q A ∇σ·∇(∇σ A ∇σ)|u|^2 dx dt -2s ∑_i=1^2∬_Q (∇ u)_i A ∇ u ·∇ (A ∇σ)_i dx dt -2s^2 ∬_Q ∇σ A ∇σ_t|u|^2 dx dt+ s ∑_i=1^2∬_Q (A∇σ)_i ∂ A/∂ x_i∇ u ·∇ u dx dt -s ∬_Q u A ∇ u ∇((A ∇σ)) dx dt-s/2∬_Q σ_t t|u|^2 dx dt. Let us denote the seven integrals on the right-hand side of (<ref>) by T_1, ⋯, T_6. Now, we will estimate each of them. Using the definitions of σ and ξ and the properties of η, we have ∇σ = - λξ∇η, ∇ξ = λξ∇η, A ∇σ·∇σ = λ^2 ξ^2 A∇η·∇η, and ∇( A ∇σ·∇σ) = λ^2 ξ^2 ∇ (A∇η·∇η) + 2λ^2 ξ^2 ∇η| A ∇η·∇η|. Therefore, we can rewrite T_1 as follows: T_1 =-s^3 ∬_Q A ∇σ∇(∇σ A ∇σ)|u|^2 dx dt =-s^3 ∬_Q ( -λξ A ∇η) ( λ^2ξ^2 ∇(∇η A ∇η) + 2λ^3ξ^2 ∇η| A ∇η·∇η| ^2) |u|^2 dx dt =2s^3 λ^4 ∬_Q ξ^3|A ∇η·∇η|^2|u|^2 dx dt + s^3 λ^3 ∬_Q ξ^3 A∇η·∇(A∇η·∇η)|u|^2 dx dt. Since A∇η·∇(A∇η·∇η) is bounded in Ω, we have s^3 λ^3 ∫_0^T∫_ωξ^3 A∇η·∇(A∇η·∇η)|u|^2 dx dt ≥ -C s^3 λ^3 ∫_0^T∫_ωξ^3 |u|^2 dx dt, and |A ∇η·∇η|≥ C>0 in Ω∖ω. Thus, we have s^3 λ^3 ∫_0^T∫_\ωξ^3 A∇η·∇(A∇η·∇η)|u|^2 dx dt ≥ -C s^3 λ^3 ∫_0^T∫_\ωξ^3 |u|^2 dx dt ≥ -C s^3 λ^3 ∫_0^T∫_\ωξ^3 |A ∇η·∇η|^2 |u|^2 dx dt ≥ -C s^3 λ^3 ∬_Qξ^3 |A ∇η·∇η|^2 |u|^2 dx dt. 
Hence, we can rewrite T_1 as: T_1 ≥ C s^3 λ^4 ∬_Q ξ^3|A ∇η·∇η|^2|u|^2 dx dt-C s^3 λ^3∫_0^T ∫_ωξ^3|u|^2 dx dt. Similarly, for T_2, we have: T_2 = -2s ∑_i=1^2∬_Q (∇ u)_i A ∇ u ·∇ (A ∇σ)_i dx dt = -2s ∬_Q( D(A∇σ) A∇ u) ·∇ u dx dt = 2s ∬_Q( λ^2 ξ (A∇η·∇ u) A∇η + λξ D(A∇η) A∇ u ) ·∇ u dx dt = 2s λ^2∬_Qξ| A∇η·∇ u | ^2 dx dt + 2sλ∬_Qξ D(A∇η) A∇ u ·∇ u dx dt ≥ Cs λ^2∬_Qξ| A∇η·∇ u | ^2 dx dt - C sλ∬_Qξ A∇ u ·∇ u dx dt. Furthermore, for T_3, as can be seen from the definition of ξ, we have ξξ_t ≤ξ^3, thus T_3 =-2 s^2 ∬_Q A∇σ·∇σ_t u^2 dx dt =-2 s^2 λ^2 ∬_Qξξ_t |A ∇η·∇η| u^2 dx dt ≥-2 s^2 λ^2 ∬_Qξ^3 |A ∇η·∇η|^2 u^2 dx dt. Similarly, we can express T_3 as: T_3 ≥ -C s^2 λ^2 ∬_Qξ^3 |A ∇η·∇η|^2 u^2 dx dt -C s^2 λ^2 ∫_0^T∫_ωξ^3 u^2 dx dt, and T_4 as: T_4 = s ∬_Q (A∇σ)_i ∂ A/∂ x_i∇ u ·∇ u dx dt ≥ -C s λ∬_Q ξ A ∇ u ·∇ u dx dt. Furthermore, by utilizing the definitions of σ and ξ, we obtain: T_5 = -s ∬_Q u A ∇ u ·∇((A ∇σ)) dx dt = s λ^3 ∬_Q ξ u A ∇ u ·∇η( A ∇η·∇η) dx dt + s λ^2 ∬_Q ξ u A ∇ u ·∇(A ∇η·∇η) dx dt +s λ^2 ∬_Q ξ u A ∇ u ·∇η(A ∇η) dx dt+s λ∬_Q ξ u A ∇ u ·∇((A ∇η)) dx dt. Let us denote T_51, ⋯, T_54 as the seven integrals on the right-hand side of (<ref>). Then we have: T_51= s λ^3 ∬_Q ξ u A ∇ u ·∇η( A ∇η·∇η) dx dt ≥ -Cs^2 λ^4 ∬_Q ξ|A ∇η·∇η|^2|u|^2 dx dt -Cλ^2 ∬_Q ξ| A∇ u ·∇η|^2 dx dt, and T_52= s λ^2 ∬_Q ξ u A ∇ u ·∇(A ∇η·∇η) dx dt ≥ -Cs^2 λ^3∬_Qξ| A ∇η·∇η| u^2 dx dt -Cs^2 λ^3∫_0^T ∫_ωξ| A ∇ u ·∇ u | u^2 dx dt - Cλ∬_Q ξ A ∇ u ·∇ u dx dt. Then we have T_53= s λ^2 ∬_Q ξ u A ∇ u ·∇η(A ∇η) dx dt ≥ -Cs^2 λ^3∬_Q ξ| A ∇η·∇η|^2 u^2 dx dt -Cs^2 λ^3∫_0^T ∫_ωξ u^2 dx dt - Cλ∬_Q ξ A ∇ u ·∇ u dx dt, and T_54= s λ∬_Q ξ u A ∇ u ∇((A ∇η))dx d t ≥ -C s^2 λ∬_Q ξ|A ∇η·∇η|^2 u^2dx d t-C s^2 λ∫_0^T ∫_ωξ u^2dx d t -C λ∬_Q ξ A ∇ u ·∇ udx d t. Combining (<ref>) to (<ref>), we obtain: T_5 ≥ -Cs^2 λ^4 ∬_Q ξ|A ∇η·∇η|^2 |u|^2dx d t -Cλ^2∬_Qξ| A∇ u·∇η| ^2 dx dt -C s^2 λ^3 ∫_0^T ∫_ωξ|u|^2dx d t- Cλ∬_Q ξ A ∇ u ·∇ u dx d t. Finally, as we can observe from the definitions of ξ and σ, we have σ_tt≤ξ^3/2. Hence, T_6 =-s/2∬_Q σ_t t|u|^2dx d t ≥-C s ∬_Q ξ^3 / 2|u|^2dx d t. From equations (<ref>)-(<ref>), (<ref>), and (<ref>), we deduce the following inequality: (P_1 u, P_2 u) ≥ C∬_Q s^3 λ^4 ξ^3 |∇η· A ∇η|^2 |u|^2 dx dt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx d t -Cs^3 λ^3 ∫_0^T ∫_ωξ^3|u|^2dx d t - Cs λ∬_Q ξ A ∇ u ·∇ udx d t - Cs ∬_Q ξ^3 / 2|u|^2dx d t. Clearly, s ∬_Q ξ^3 / 2|u|^2dx dt can be absorbed by other terms. Combining (<ref>) and (<ref>), we can conclude that P_1 u^2+P_2 u^2+C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx d t ≤ Ce^-s σ f^2 +Cs λ∬_Q ξ A ∇ u ·∇ udx d t+Cs^3 λ^3 ∫_0^T ∫_ωξ^3|u|^2dx d t. Furthermore, we can deduce that s^2 λ^2 ∬_Q ξ^2|u|^2dx d t=s^2 λ^2 ∫_0^T ∫_\ωξ^2|u|^2dx d t + s^2 λ^2 ∫_0^T ∫_ωξ^2|u|^2dx d t. Clearly, the second term on the right in (<ref>) can be absorbed by the other terms. Let us now consider the first term on the right, where we have s^2 λ^2 ∫_0^T ∫_\ωξ^2|u|^2dx d t ≤ Cs^2 λ^2 ∫_0^T ∫_\ωξ^2|A ∇η·∇η|^2|u|^2dx d t ≤ Cs^2 λ^2 ∬_Q ξ^2|A ∇η·∇η|^2|u|^2dx d t. Therefore, we have P_1 u^2+P_2 u^2+C s^2 λ^2 ∫_0^T ∫_\ωξ^2|u|^2dx d t + C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx d t ≤ Ce^-s σ f^2 +Cs λ∬_Q ξ A ∇ u ·∇ udx d t+Cs^3 λ^3 ∫_0^T ∫_ωξ^3|u|^2dx d t. 
Using the definitions of P_1 u and P_2 u, we can observe that s^-1∬_Q ξ^-1|u_t|^2dx d t = s^-1∬_Q ξ^-1(P_1 u -s u (A∇σ)- 2s ∇ u · A∇σ)^2dx d t ≤ Cs^-1P_1 u^2+s ∬_Q ξ^-1|u|^2|(A ∇σ)|^2dx d t +C s ∬_Q ξ^-1|∇ u A ∇σ|^2dx d t ≤ Cs^-1P_1 u^2+C s λ^4 ∬_Q ξ| A ∇η·∇η|^2 |u|^2dx d t +C s λ^2 ∬_Q ξ|u|^2dx d t+C s λ^2 ∬_Q ξ|∇ u · A ∇η|^2dx d t, and s^-1∬_Q ξ^-1|(A∇ u)|^2dx d t = s^-1∬_Q ξ^-1(P_2 u -s^2 u ∇σ· A ∇σ - s σ_t u)^2dx d t ≤ C s^-1P_2 u^2+s^3 ∬_Q ξ^-1|u|^2|∇σ· A ∇σ|^2dx d t +C s ∬_Q ξ^-1|σ_t|^2|u|^2dx d t ≤ C s^-1P_2 u^2+C s^3 λ^4 ∬_Q ξ^3|∇η· A ∇η|^2|u|^2dx d t +C s ∬_Q ξ^2|u|^2dx d t. From equation (<ref>), we obtain the inequality s^-1∬_Qξ^-1 (|u_t |^2 + |(A∇ u)|^2) dx dt +C s^2 λ^2 ∫_0^T ∫_\ωξ^2|u|^2dx d t +C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx d t ≤ Ce^-s σ f^2 +Cs λ∬_Q ξ A ∇ u ·∇ udx d t+Cs^3 λ^3 ∫_0^T ∫_ωξ^3|u|^2dx d t. Considering the term s λ∬_Q ξ A ∇ u ·∇ udx d t, we have s λ∬_Q ξ A ∇ u ·∇ udx d t= s λ∬_Σξ u A ∇ u ·ν d s d t - s λ∬_Q (ξ A∇ u) udx d t = - s λ∬_Q (ξ A∇ u) udx d t. Then, we can further simplify the expression as follows: - s λ∬_Q (ξ A∇ u) udx d t = - s λ^2 ∬_Q ξ u∇η· A∇ u dx d t - s λ∬_Q ξ( A∇ u) udx d t ≤ C λ^2 ∬_Q ξ| ∇η· A∇ u | ^2dx d t + Cs^2 λ^2 ∬_Q ξ u^2dx d t +C s^-1λ^-1∬_Q ξ^-1| ( A∇ u)| ^2dx d t + Cs^3 λ^3 ∬_Q ξ^3 u^2dx d t. Thus, the term s λ∬_Q ξ A ∇ u ·∇ udx d t can be absorbed by the other terms. This implies that: s^-1∬_Qξ^-1 (|u_t |^2 + |(A∇ u)|^2) dx dt +C s^2 λ^2 ∫_0^T ∫_\ωξ^2|u|^2dx d t +C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx d t ≤ Ce^-s σ f^2 +Cs^3 λ^3 ∫_0^T ∫_ωξ^3|u|^2dx d t. By employing classical arguments, we can then revert back to the original variable w and conclude the result. § A SPECIFIC EXAMPLE As stated earlier in this paper, the well-posedness of this example can be easily verified. In the following, we provide some results regarding the improvement of interior regularity and Carleman estimates. §.§ Internal regularity improvement Having established the existence of solutions to equation (<ref>), we now focus on demonstrating that the regularity of the solutions to (<ref>) can be enhanced within the interior of Ω. Specifically, we can achieve improved regularity by differentiating with respect to the time variable t, resulting in u_t ∈ℋ_0^1(Ω). For all the solution of equation (<ref>), one has y^α_y|∂_xy w|^2, y^α_y|∂_xx w|^2, x^α_x|∂_xy w|^2, and x^α_x|∂_yy w|^2 ∈ L^1_ loc(). Let us introduce a function ψ = ψ(x,y) satisfying the following properties: ψ∈ C_0^∞ (), ψ =1 ^2ϵ, ψ =0 _ϵ, 0≤ψ≤ 1, where Ω^2ϵ := (2ϵ, 1-2ϵ) × (2ϵ, 1-2ϵ) and Ω_ϵ := Ω∖Ω^ϵ. Let us define the operators as follows: D_x^h (w):= w(x+h,y)-w(x,y)/h, v:= D_x^-h(ψ^2 D_x^h (w)). Multiplying the equation in (<ref>) by v and integrating over Ω, we obtain ∫_^ϵ v w_t dxdy + ∫_^ϵ v (A∇ w) dxdy =∫_^ϵ v f dxdy. 
Next, let us consider the second term on the left-hand side: ∫_^ϵ v (A∇ w) dxdy=∫_^ϵ D_x^-h(ψ^2 D_x^h (w)) (A∇ w) dxdy = -∫_^ϵ∇ (D_x^-h(ψ^2 D_x^h (w))) · A∇ w dxdy = -∫_^ϵ D_x^-h (∇(ψ^2 D_x^h (w))) · A∇ w dxdy = ∫_^ϵ (∇(ψ^2 D_x^h (w))) D_x^h(A∇ w) dxdy =∫_^ϵ (2ψ∇ψ D_x^h (w)+ ψ^2 ∇ (D_x^h (w)) D_x^h(A∇ w) dxdy = ∫_^ϵ 2ψ∇ψ D_x^h (w) D_x^h(A∇ w) dxdy + ∫_^ϵψ^2 ∇ (D_x^h (w)) D_x^h(A∇ w) dxdy = ∫_^ϵ 2ψ∇ψ D_x^h (w) D_x^h(A∇ w) dxdy+∫_^ϵψ^2 ∇ (D_x^h (w)) D_x^h(A) ∇ w dxdy + ∫_^ϵψ^2 ∇ (D_x^h (w)) A^h ∇ D_x^h (w) dxdy, where A^h := [ y^α_y 0; 0 (x+h)^α_x ], and ∫_^ϵψ^2 ∇ (D_x^h (w)) A^h ∇ D_x^h (w) dxdy = ∫_^ϵψ^2 ( y^α_y |∂_x (D_x^h (w))|^2 + (x+h)^α_x |∂_y (D_x^h (w))|^2) dxdy = ∫_^ϵ D_x^-h(ψ^2 D_x^h (w)) f dxdy - ∫_^ϵ D_x^-h(ψ^2 D_x^h (w)) w_t dxdy -∫_^ϵψ^2 ∇ (D_x^h (w)) D_x^h(A) ∇ w dxdy-∫_^ϵ 2ψ∇ψ D_x^h (w) D_x^h(A∇ w) dxdy. We can now proceed to estimate the terms on the right-hand side. Since we can assume f ∈ H^1(Ω), we have ∫_^ϵ D_x^-h(ψ^2 D_x^h (w)) f dxdy = -∫_^ϵψ^2 D_x^h (w) D_x^h (f) dxdy ≤∫_^ϵψ^2 D_x^h (w) D_x^h (f) dxdy≤ C f_H^1(^ϵ) + C w_ℋ_0^1(^ϵ). Similarly, we obtain ∫_^ϵ D_x^-h(ψ^2 D_x^h (w)) w_t dxdy ≤ C w_ℋ_0^1(^ϵ) + C w_t_ℋ_0^1(^ϵ), and -∫_^ϵψ^2 ∇ (D_x^h (w)) D_x^h(A) ∇ w dxdy ≤ C∫_^ϵψ^2 ∂_y (D_x^h (w)) ∂_y w dxdy ≤ γ∫_^ϵψ^2 x^α_x |∂_y (D_x^h (w))|^2 dxdy + C/γ w_ℋ_0^1(^ϵ). Finally, we estimate the last term as follows: -∫_^ϵ 2ψ∇ψ D_x^h (w) D_x^h(A∇ w) dxdy = -∫_^ϵ 2ψ∇ψ D_x^h (w) D_x^h(A)∇ w dxdy -∫_^ϵ 2ψ∇ψ D_x^h (w) D_x^h(∇ w)A^h dxdy ≤ γ∫_^ϵψ^2 x^α_x |∂_y (D_x^h (w))|^2 dxdy +γ∫_^ϵψ^2 ∇ (D_x^h (w)) A^h ∇ D_x^h (w) dxdy+ C/γ w_ℋ_0^1(^ϵ). Obviously, the first and second terms on the right-hand side can be absorbed by equation (<ref>). Thus, we have ∫_^ϵψ^2 ∇ (D_x^h (w)) A∇ D_x^h (w) dxdy≤ C∫_^ϵψ^2 ∇ (D_x^h (w)) A^h ∇ D_x^h (w) dxdy≤ C. Similarly, we obtain ∫_^ϵψ^2 ∇ (D_y^h (w)) A∇ D_y^h (w) dxdy ≤ C. This concludes the proof. It is worth noting that, on the boundary, we cannot improve the regularity due to the specific nature of the equation we are considering. Consequently, on the boundary, we do not have the property (A ∇ u ·∇σ) A ∇ u, (A∇ u·∇ u)A∇σ∈ (W^1,1(Ω))^2 (see the estimate of I_4 for the boundary term in the following section). In other words, the functions (A ∇ u ·∇σ) A ∇ u and (A∇ u·∇ u)A∇σ do not have a trace on ∂Ω. This distinction shapes our research approach significantly differently from <cit.>. It is also the reason why we choose the control domain as ω_0 in the subsequent subsection. §.§ Carleman esitimates The control system (<ref>) we are studying is a specific case of the previously mentioned (<ref>). However, the weight function we have chosen in the following analysis is a special weight function that allows for more convenient calculations. Therefore, we need to provide a Carleman estimate that differs slightly from the previous one. Let us now establish a Carleman estimate. We know that the adjoint equation of (<ref>) is given by ∂_tw + (A∇ w)=f, Q, w(x,y,t)=0, A∇ w ·ν =0, Σ, w(x, y, T)= w_T, Ω. Fix δ>0. Denote (δ,1)×(δ,1) by Ω^δ, Ω\Ω^δ by Ω_δ, and Σ_δ:= ∂Ω^δ×(0,T). Let η(x,y) := (x-δ)^2 (y-δ)^2 (x-1)^2(y-1)^2/2, x ∈Ω^δ, 0, x ∈Ω_δ. It is evident that η∈ C^2(Ω), η > 0 in Ω^δ, and η = 0 on ∂Ω^δ. By utilizing the classical arguments in <cit.>, we can transform (1+δ/2,1+δ/2) to ω_0. As a result, we obtain | ∇η| ≥ C > 0, Ω\ω_0, where ω_0 is a nonempty open set satisfying Ω_δ⊂ω_0 ⊂Ω, and Γ⊂∂ω_0. We define θ(t):=[t(T-t)]^-4, ξ(x, t):=θ(t) e^λ(8|η|_∞+η (x)), σ(x, t):=θ(t) e^10 λ|η|_∞-ξ(x, t). 
In the following discussion, C>0 represents a generic constant that depends solely on T and α_x, α_y. We assume that w is a sufficiently regular solution of (<ref>). Moreover, we consider w ∈ H^1(0,T; ℋ_0^1(Ω)). For s>s_0>0, we introduce u=e^-sσ w. Then, the following properties hold: (i) u=∂ u/∂ x_i=0 at t=0 and t=T; (ii) u=0 or A∇ u ·ν =0 on Σ; (iii) If P_1 u:=u_t+s div(u A ∇σ)+s ∇σ A ∇ u and P_2 u:=div(A ∇ u)+s^2 u ∇σ A ∇σ+s σ_t u, then P_1 u+P_2 u=e^-s σ f. From item (iii), it follows that P_1 u^2+P_2 u^2+2(P_1 u, P_2 u)=e^-s σ f^2. We decompose (P_1 u, P_2 u) into four parts: I_1, I_2, I_3, and I_4, defined as follows: I_1:=((A ∇ u)+s^2 u ∇σ· A ∇σ+s σ_t u, u_t), I_2:=s^2(σ_t u, (u A ∇σ)+∇σ A ∇ u), I_3:=s^3( u ∇σ· A ∇σ, (u A ∇σ)+∇σ· A ∇ u), I_4:=s((A ∇ u), (u A ∇σ)+∇σ· A ∇ u). Before proceeding with the calculations, we introduce the following theorem, which will be useful. For all v∈ℋ_0^1(Ω), it holds that ∫_ v (A∇ u) dx dy= -∫_∇ v · A∇ u dx dy. Let v_n ∈ C_0^∞(Ω) be a sequence converging to v in H^1(Ω), which is possible due to the density of C_0^∞(Ω) in ℋ_0^1(Ω). Then, we have ∫_ v (A∇ u) dx dy = lim_n→∞∫_ v_n (A∇ u) dx dy=lim_n→∞( v_n,(A∇ u)) =-lim_n→∞( ∇ v_n, A∇ u). Using integration by parts, we expand the above expression as follows: -lim_n→∞( ∇ v_n, A∇ u)= lim_n→∞-( ∂ v_n/∂ x, y^α_y∂_x u)-( ∂ v_n/∂ y, x^α_x∂_y u) = lim_n→∞-(y^α_y/2∂ v_n/∂ x, y^α_y/2∂_x u)-(x^α_x/2∂ v_n/∂ y, x^α_x/2∂_y u) = lim_n→∞-∫_ y^α_y∂ v_n/∂ x∂ u/∂ x dx dy - ∫_ x^α_x∂ v_n/∂ y∂ u/∂ y dx dy = lim_n→∞-∫_∇ v_n A ∇ u dx dy =-∫_∇ v A ∇ u dx dy. Thus, the result is established. From Theorem <ref> and item (i), we obtain the following expressions: I_1= ∬_Q u_t (A∇ u) +s^2 u ∇σ· A∇σ u_t + sσ_t u u_t dx dy dt = ∫_ s^2 ∇σ A∇σ·1/2u^2 |_0^T dx dy + ∫_ sσ_t·1/2u^2 |_0^T dx dy + ∬_Σ u_t A∇ u ·ν ds dt -∬_Q A∇ u ·∇ u_t dx dy dt -1/2∬_Q(sσ_t + s^2 ∇σ· A∇σ)_t u^2 dx dy dt = - 1/2∫_ A∇ u ·∇ u |_0^T dx dy -1/2∬_Q(sσ_t + s^2 ∇σ· A∇σ)_t u^2 dx dy dt = -1/2∬_Q(sσ_t + s^2 ∇σ· A∇σ)_t u^2 dx dy dt, and I_2 = s^2 ∬_Q σ_t u ((u A ∇σ) + A∇ u ·∇σ) dx dy dt = s^2 ∬_Q σ_t u (( A ∇σ) + 2 A∇ u ·∇σ) dx dy dt = s^2 ∬_Q σ_t u ( A ∇σ) + ( A ∇σ) σ_t u^2 dx dy dt = s^2 ∬_Σσ_t u^2 A ∇σ·ν dsdt - s^2 ∬_Q (σ_t A ∇σ)u^2 dx dydt + s^2 ∬_Q ( A ∇σ) σ_t u^2 dx dy dt = - s^2 ∬_Q ( A ∇σ) σ_t u^2 dx dy dt -∬_Q A ∇σ·∇σ_t u^2 dx dy dt+ s^2∬_Q ( A ∇σ) σ_t u^2 dx dydt = -s^2∬_Q A ∇σ·∇σ_t u^2 dx dy dt. Similarly, we can deduce the following expressions: I_3 = s^3 ∬_Q u A ∇σ·∇σ ( (u A ∇σ) + A∇ u ·∇σ ) dx dy dt = s^3 ∬_Σ u^2 (A ∇σ·∇σ) A ∇σ·ν ds dt -s^3 ∬_Q u A ∇σ·∇ (u A ∇σ·∇σ ) dx dy dt + s^3 ∬_Q (A ∇σ·∇ u) ( A ∇σ·∇σ ) u dx dy dt = -s^3∬_Q A∇σ·∇ ( A∇σ·∇σ) u^2 dx dy dt, and I_4= s ∬_Q (A ∇ u)( (u A ∇σ)+ A ∇ u ·∇σ) dx dy dt = s ∬_Q (A ∇ u)( A ∇ u ·∇σ+ u (A ∇σ)+ A ∇ u ·∇σ) dx dy dt = s ∬_Q (A ∇ u) u (A ∇σ)dx dy dt + 2s ∬_Q (A ∇ u) A ∇ u ·∇σ dx dy dt = s ∬_Σ u (A ∇σ )A ∇ u ·ν dsdt + 2s ∬_Σ_δ (A ∇ u ·∇σ) A ∇ u ·ν_δ ds dt -s ∬_Q A ∇ u ·∇( u (A ∇σ)) dx dy dt -2s ∬_Ω^δ A ∇ u·∇( A ∇ u ·∇σ) dx dy dt = -s ∬_Q A ∇ u ·∇ u (A ∇σ)dx dy dt-s ∬_Q u A ∇ u·∇ ((A ∇σ)) dx dy dt -2s ∬_Q A ∇ u·∇(A ∇ u ·∇σ) dx dy dt. We observe that 2s ∬_Σ_δ (A ∇ u ·∇σ) A ∇ u ·ν_δ ds dt = 0 since ∇σ = 0 on Σ_δ. Moreover, -2s ∬_Q A ∇ u ·∇ (A∇ u ·∇σ) dx dydt = -2s ∑_i=1^2∬_Q (A∇σ)_i A ∇ u ·∂/∂ x_i(∇ u) + (∇ u)_i A ∇ u ·∇ (A ∇σ)_i dx dydt = -2s ∬_Q A ∇ u [ y^α_y∂σ/∂ x∇∂ u/∂ x + x^α_x∂σ/∂ y∇∂ u/∂ y + ∂ u/∂ x∇ (y^α_y∂σ/∂ x) + ∂ u/∂ y∇ (x^α_x∂σ/∂ y) ] dx dydt. 
Here, we have -2s ∬_Q A ∇ u [ y^α_y∂σ/∂ x∇∂ u/∂ x + x^α_x∂σ/∂ y∇∂ u/∂ y] dx dydt = -2s ∬_Q y^α_y∂σ/∂ x A ∇ u ·∂/∂ x(∇ u) dx dydt - 2s ∬_Q x^α_x∂σ/∂ y A ∇ u ·∂/∂ y(∇ u) dx dydt = -s ∬_Q y^α_y∂σ/∂ x[ ∂/∂ x (A ∇ u ·∇ u) - ∂ A/∂ x∇ u ·∇ u ] dx dydt -s ∬_Q x^α_x∂σ/∂ y[ ∂/∂ y (A ∇ u ·∇ u) - ∂ A/∂ y∇ u ·∇ u ] dx dydt = -s ∬_Σ_δ (A ∇ u ·∇ u)(y^α_y∂σ/∂ x) ·ν_x dsdt + s∬_Q A ∇ u ·∇ u ∂/∂ x (y^α_y∂σ/∂ x) dx dydt +s ∬_Q y^α_y∂σ/∂ x∂ A/∂ x∇ u ·∇ u dx dydt -s ∬_Σ_δ (A ∇ u ·∇ u)(x^α_x∂σ/∂ y) ·ν_y dsdt + s∬_Q A ∇ u ·∇ u ∂/∂ y (x^α_x∂σ/∂ y) dx dydt +s ∬_Q x^α_x∂σ/∂ y∂ A/∂ y∇ u ·∇ u dx dydt = s ∬_Q A∇ u ·∇ u (A∇σ) dx dydt + s∬_Q y^α_y∂σ/∂ x (α_x x^α_x-1| ∂ u/∂ y| ^2) dx dydt + s∬_Q x^α_x∂σ/∂ y (α_y y^α_y-1| ∂ u/∂ x| ^2) dx dydt. Hence, we obtain the following expression: I_4 = -s ∬_Q u A ∇ u·∇ ((A ∇σ))dx dy dt -2s ∬_Q A∇ u ·[ ∂ u/∂ x∇ (y^α_y∂σ/∂ x) + ∂ u/∂ y∇ (x^α_x∂σ/∂ y) ] dx dy dt + s∬_Q y^α_y∂σ/∂ x (α_x x^α_x-1| ∂ u/∂ y| ^2) dx dydt + s∬_Q x^α_x∂σ/∂ y (α_y y^α_y-1| ∂ u/∂ x| ^2) dx dydt. From equations (<ref>) to (<ref>), we conclude that (P_1 u, P_2 u) = -s^3 ∬_Q A ∇σ·∇(∇σ A ∇σ)|u|^2 dx dy dt -2s ∬_Q A∇ u ·[ ∂ u/∂ x∇ (y^α_y∂σ/∂ x) + ∂ u/∂ y∇ (x^α_x∂σ/∂ y) ] dx dy dt -2s^2 ∬_Q ∇σ A ∇σ_t|u|^2 dx dy dt+ s∬_Q y^α_y∂σ/∂ x( α_x x^α_x-1| ∂ u/∂ y| ^2) dx dydt + s∬_Q x^α_x∂σ/∂ y( α_y y^α_y-1| ∂ u/∂ x| ^2) dx dydt -s ∬_Q u A ∇ u ∇((A ∇σ)) dx dy dt -s/2∬_Q σ_t t|u|^2 dx dy dt. Let us denote the seven integrals on the right-hand side of equation (<ref>) as J_1, ⋯, J_6. Our goal now is to estimate each of these integrals. By utilizing the definitions of σ and ξ, as well as the properties of η, we can derive the following relationships: ∇σ = - λξ∇η, ∇ξ = λξ∇η, A ∇σ·∇σ = λ^2 ξ^2 A∇η·∇η, and ∇( A ∇σ·∇σ) = λ^2 ξ^2 ∇ (A∇η·∇η) + 2λ^3 ξ^2 ∇η| A ∇η·∇η|. Thus, we have J_1 =-s^3 ∬_Q A ∇σ∇(∇σ A ∇σ)|u|^2 dx dy dt =-s^3 ∬_Q ( -λξ A ∇η) ( λ^2ξ^2 ∇(∇η A ∇η) + 2λ^3ξ^2 ∇η| A ∇η·∇η| ^2) |u|^2 dx dy dt =2s^3 λ^4 ∬_Q ξ^3|A ∇η·∇η|^2|u|^2 dx dy dt + s^3 λ^3 ∬_Q ξ^3 A∇η·∇(A∇η·∇η)|u|^2 dx dy dt. Since A∇η·∇(A∇η·∇η) is bounded in Ω, we can deduce that s^3 λ^3 ∫_0^T∫_ω_0ξ^3 A∇η·∇(A∇η·∇η)|u|^2 dx dy dt ≥ -C s^3 λ^3 ∫_0^T∫_ω_0ξ^3 |u|^2 dx dy dt. Considering the inequality |A ∇η·∇η|≥ C>0 in Ω\ω_0, we can deduce the following: s^3 λ^3 ∫_0^T∫_\ω_0ξ^3 A∇η·∇(A∇η·∇η)|u|^2 dx dy dt ≥ -C s^3 λ^3 ∫_0^T∫_\ω_0ξ^3 |u|^2 dx dy dt ≥ -C s^3 λ^3 ∫_0^T∫_\ω_0ξ^3 |A ∇η·∇η|^2 |u|^2 dx dy dt ≥ -C s^3 λ^3 ∬_Qξ^3 |A ∇η·∇η|^2 |u|^2 dx dy dt. Thus, we can conclude that J_1 ≥ C s^3 λ^4 ∬_Q ξ^3|A ∇η·∇η|^2|u|^2 dx dy dt-C s^3 λ^3 ∫_0^T ∫_ω_0ξ^3|u|^2 dx dy dt, and J_2 = -2s ∬_Q A∇ u ·[ ∂ u/∂ x∇ (y^α_y∂σ/∂ x) + ∂ u/∂ y∇ (x^α_x∂σ/∂ y) ] dx dy dt = -2s ∬_Q( D(A∇σ) A∇ u) ·∇ u dx dy dt = 2s ∬_Q( λ^2 ξ (A∇η·∇ u) A∇η + λξ D(A∇η) A∇ u ) ·∇ u dx dy dt = 2s λ^2∬_Qξ| A∇η·∇ u | ^2 dx dy dt + 2sλ∬_Qξ D(A∇η) A∇ u ·∇ u dx dy dt ≥ Cs λ^2∬_Qξ| A∇η·∇ u | ^2 dx dy dt - C sλ∬_Qξ A∇ u ·∇ u dx dy dt. From the definition of ξ, it is evident that ξξ_t ≤ξ^3. Therefore, we can write J_3 =-2 s^2 ∬_Q A∇σ·∇σ_t u^2 dx dydt =-2 s^2 λ^2 ∬_Qξξ_t |A ∇η·∇η| u^2 dx dydt ≥-2 s^2 λ^2 ∬_Qξ^3 |A ∇η·∇η| u^2 dx dydt. Likewise, we obtain J_3 ≥ -C s^2 λ^2 ∬_Qξ^3 |A ∇η·∇η|^2 u^2 dx dydt -C s^2 λ^2 ∫_0^T∫_ω_0ξ^3 |A ∇η·∇η|^2 u^2 dx dydt, and J_4 = s∬_Q y^α_y∂σ/∂ x( α_x x^α_x-1| ∂ u/∂ y| ^2) dx dydt + s∬_Q x^α_x∂σ/∂ y( α_y y^α_y-1| ∂ u/∂ x| ^2) dx dydt ≥ -Cs λ∬_Q α_x ξ y^α_y+2 x^α_x | ∂ u/∂ y|^2 dx dy dt -Cs λ∬_Q α_y ξ x^α_x +2 y^α_y | ∂ u/∂ x|^2 dx dy dt ≥ -C s λ∬_Q ξ A ∇ u ·∇ u dx dy dt. 
By using the definitions of σ and ξ, we can derive the following expression: J_5 = -s ∬_Q u A ∇ u ·∇((A ∇σ)) dx dy dt = s λ^3 ∬_Q ξ u A ∇ u ·∇η( A ∇η·∇η) dx dy dt + s λ^2 ∬_Q ξ u A ∇ u ·∇(A ∇η·∇η) dx dy dt +s λ^2 ∬_Q ξ u A ∇ u ·∇η(A ∇η) dx dy dt+s λ∬_Q ξ u A ∇ u ·∇((A ∇η)) dx dy dt. Let us denote the seven integrals on the right-hand side of (<ref>) as J_51, ⋯, J_54. Then we have J_51= s λ^3 ∬_Q ξ u A ∇ u ·∇η( A ∇η·∇η) dx dy dt ≥ -Cs^2 λ^4 ∬_Q ξ|A ∇η·∇η|^2|u|^2 dx dy dt -Cλ^2 ∬_Q ξ| A∇ u ·∇η|^2 dx dy dt, and J_52= s λ^2 ∬_Q ξ u A ∇ u ·∇(A ∇η·∇η) dx dy dt ≥ -Cs^2 λ^3∬_Qξ| A ∇η·∇η| u^2 dx dy dt -Cs^2 λ^3∫_0^T ∫_ω_0ξ| A ∇ u ·∇ u | u^2 dx dy dt - Cλ∬_Q ξ A ∇ u ·∇ u dx dy dt. Then we have J_53= s λ^2 ∬_Q ξ u A ∇ u ·∇η(A ∇η) dx dy dt ≥ -Cs^2 λ^3∬_Q ξ| A ∇η·∇η|^2 u^2 dx dy dt -Cs^2 λ^3∫_0^T ∫_ω_0ξ u^2 dx dy dt - Cλ∬_Q ξ A ∇ u ·∇ u dx dy dt, and J_54= s λ∬_Q ξ u A ∇ u ∇((A ∇η))dx dy d t ≥ -C s^2 λ∬_Q ξ|A ∇η·∇η|^2 u^2dx dy d t-C s^2 λ∫_0^T ∫_ω_0ξ u^2dx dy d t -C λ∬_Q ξ A ∇ u ·∇ udx dy d t. Combining (<ref>) to (<ref>), we obtain J_5 ≥ -Cs^2 λ^4 ∬_Q ξ|A ∇η·∇η|^2 |u|^2dx dy d t -Cλ^2∬_Qξ| A∇ u·∇η| ^2 dx dy dt -C s^2 λ^3 ∫_0^T ∫_ω_0ξ|u|^2dx dy d t- Cλ∬_Q ξ A ∇ u ·∇ u dx dy d t. Finally, as can be observed from the definitions of ξ and σ, we have σ_tt≤ξ^3/2. Hence, J_6 =-s/2∬_Q σ_t t|u|^2dx dy d t ≥-C s ∬_Q ξ^3 / 2|u|^2dx dy d t. From (<ref>)-(<ref>) and (<ref>)-(<ref>), we deduce that (P_1 u, P_2 u) ≥ C∬_Q s^3 λ^4 ξ^3 |∇η· A ∇η|^2 |u|^2 dx dydt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx dy d t -Cs^3 λ^3 ∫_0^T ∫_ω_0ξ^3|u|^2dx dy d t - Cs λ∬_Q ξ A ∇ u ·∇ udx dy d t - Cs ∬_Q ξ^3 / 2|u|^2dx dy d t. It is evident that the term s ∬_Q ξ^3 / 2|u|^2dx dy dt can be absorbed by other terms. By combining (<ref>) and (<ref>), we conclude that P_1 u^2+P_2 u^2+C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dydt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx dy d t ≤ Ce^-s σ f^2 +Cs λ∬_Q ξ A ∇ u ·∇ udx dy d t+Cs^3 λ^3 ∫_0^T ∫_ω_0ξ^3|u|^2dx dy d t. Moreover, we can deduce that s^2 λ^2 ∬_Q ξ^2|u|^2dx dy d t=s^2 λ^2 ∫_0^T ∫_\ω_0ξ^2|u|^2dx dy d t + s^2 λ^2 ∫_0^T ∫_ω_0ξ^2|u|^2dx dy d t. Clearly, the second term on the right can be absorbed by the other terms in (<ref>). Let us consider the first term on the right, we have s^2 λ^2 ∫_0^T ∫_\ω_0ξ^2|u|^2dx dy d t ≤ Cs^2 λ^2 ∫_0^T ∫_\ω_0ξ^2|A ∇η·∇η|^2|u|^2dx dy d t ≤ Cs^2 λ^2 ∬_Q ξ^2|A ∇η·∇η|^2|u|^2dx dy d t. Then we obtain P_1 u^2+P_2 u^2+C s^2 λ^2 ∫_0^T ∫_\ω_0ξ^2|u|^2dx dy d t + C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dydt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx dy d t ≤ Ce^-s σ f^2 +Cs λ∬_Q ξ A ∇ u ·∇ udx dy d t+Cs^3 λ^3 ∫_0^T ∫_ω_0ξ^3|u|^2dx dy d t. Now, using the definitions of P_1 u and P_2 u, we observe that s^-1∬_Q ξ^-1|u_t|^2dx dy d t = s^-1∬_Q ξ^-1(P_1 u -s u (A∇σ)- 2s ∇ u · A∇σ)^2dx dy d t ≤ Cs^-1P_1 u^2+s ∬_Q ξ^-1|u|^2|(A ∇σ)|^2dx dy d t +C s ∬_Q ξ^-1|∇ u A ∇σ|^2dx dy d t ≤ Cs^-1P_1 u^2+C s λ^4 ∬_Q ξ| A ∇η·∇η|^2 |u|^2dx dy d t +C s λ^2 ∬_Q ξ|u|^2dx dy d t+C s λ^2 ∬_Q ξ|∇ u · A ∇η|^2dx dy d t, and s^-1∬_Q ξ^-1|(A∇ u)|^2dx dy d t = s^-1∬_Q ξ^-1(P_2 u -s^2 u ∇σ· A ∇σ - s σ_t u)^2dx dy d t ≤ C s^-1P_2 u^2+s^3 ∬_Q ξ^-1|u|^2|∇σ· A ∇σ|^2dx dy d t +C s ∬_Q ξ^-1|σ_t|^2|u|^2dx dy d t ≤ C s^-1P_2 u^2+C s^3 λ^4 ∬_Q ξ^3|∇η· A ∇η|^2|u|^2dx dy d t +C s ∬_Q ξ^2|u|^2dx dy d t. From (<ref>), we obtain s^-1∬_Qξ^-1 (|u_t |^2 + |(A∇ u)|^2) dx dydt +C s^2 λ^2 ∫_0^T ∫_\ω_0ξ^2|u|^2dx dy d t +C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dydt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx dy d t ≤ Ce^-s σ f^2 +Cs λ∬_Q ξ A ∇ u ·∇ udx dy d t+Cs^3 λ^3 ∫_0^T ∫_ω_0ξ^3|u|^2dx dy d t. 
Since s λ∬_Q ξ A ∇ u ·∇ udx dy d t= s λ∬_Σξ u A ∇ u ·ν d s d t - s λ∬_Q (ξ A∇ u) udx dy d t = - s λ∬_Q (ξ A∇ u) udx dy d t, and then, - s λ∬_Q (ξ A∇ u) udx dy d t = - s λ^2 ∬_Q ξ u∇η· A∇ u dx dy d t - s λ∬_Q ξ( A∇ u) udx dy d t ≤ C λ^2 ∬_Q ξ| ∇η· A∇ u | ^2dx dy d t + Cs^2 λ^2 ∬_Q ξ u^2dx dy d t +C s^-1λ^-1∬_Q ξ^-1| ( A∇ u)| ^2dx dy d t + Cs^3 λ^3 ∬_Q ξ^3 u^2dx dy d t, thus, s λ∬_Q ξ A ∇ u ·∇ udx dy dt can be absorbed by other terms. Therefore, we have s^-1∬_Qξ^-1 (|u_t |^2 + |(A∇ u)|^2) dx dydt +C s^2 λ^2 ∫_0^T ∫_\ω_0ξ^2|u|^2dx dy d t +C∬_Q s^3 λ^4 ξ^3|A ∇η·∇η|^2|u|^2 dx dydt + C s λ^2 ∬_Qξ|∇ u · A ∇η|^2 dx dy d t ≤ Ce^-s σ f^2 +Cs^3 λ^3 ∫_0^T ∫_ω_0ξ^3|u|^2dx dy d t. Using classical arguments, we can transform back to the original variable w and conclude the result. Acknowledgement This work is supported by the National Natural Science Foundation of China, the Science-Technology Foundation of Hunan Province. abbrvnat
http://arxiv.org/abs/2307.02763v1
20230706040605
Your spouse needs professional help: Determining the Contextual Appropriateness of Messages through Modeling Social Relationships
[ "David Jurgens", "Agrima Seth", "Jackson Sargent", "Athena Aghighi", "Michael Geraci" ]
cs.CL
[ "cs.CL", "cs.CY" ]
SeLiNet: Sentiment enriched Lightweight Network for Emotion Recognition in Images Tuneer Khargonkar1, Shwetank Choudhary2, Sumit Kumar3, Barath Raj KR4 Samsung R&D Institute, Bangalore, India Email: {1t.khargonkar, 2sj.choudhary, 3sumit.kr, 4barathraj.kr}@samsung.com August 1, 2023 =================================================================================================================================================================================================== Understanding interpersonal communication requires, in part, understanding the social context and norms in which a message is said. However, current methods for identifying offensive content in such communication largely operate independent of context, with only a few approaches considering community norms or prior conversation as context. Here, we introduce a new approach to identifying inappropriate communication by explicitly modeling the social relationship between the individuals. We introduce a new dataset of contextually-situated judgments of appropriateness and show that large language models can readily incorporate relationship information to accurately identify appropriateness in a given context. Using data from online conversations and movie dialogues, we provide insight into how the relationships themselves function as implicit norms and quantify the degree to which context-sensitivity is needed in different conversation settings. Further, we also demonstrate that contextual-appropriateness judgments are predictive of other social factors expressed in language such as condescension and politeness. § INTRODUCTION These authors contributed equally to this work ⋆These authors contributed equally to this work footnote Interpersonal communication relies on shared expectations of the norms of communication <cit.>. Some of these norms are widely shared across social contexts, e.g., racial epithets are taboo, enabling NLP models to readily identify certain forms of offensive language <cit.>. Yet, not all norms are widely shared; the same message said in two different social contexts may have different levels of acceptability (Figure <ref>). While NLP has recognized the role of social context as important <cit.>, few works have directly incorporated this context into modeling whether messages violate social norms. Here, we explicitly model relationships as the social context in which a message is said in order to assess whether the message is appropriate. NLP models have grown more sophisticated in modeling the social norms needed to identify offensive content. Prior work has shown the benefits of modeling context <cit.>, such as the demographics of annotators and readers <cit.> and the online community in which a message is said <cit.>. However, these works overlook normative expectations within people's relationships. In this paper, we introduce a new dataset of over 12,236 instances labeled for whether the message was appropriate in a given relationship context. Using this data, we show that computation models can accurately identify the contextual appropriateness of a message, with the best-performing model attaining a 0.70 Binary F1. Analyzing the judgments of this classifier reveals the structure of the shared norms between relationships. Through examining a large corpus of relationship-labeled conversations, we find that roughly 19% of appropriate messages could be perceived as inappropriate in another context, highlighting the need for models that explicitly incorporate relationships. 
Finally, we show that our model's relationship-appropriate judgments provide useful features for identifying subtly offensive language, such as condescension. § SOCIAL NORMS OF APPROPRIATENESS Relationships are the foundation of society: most human behaviors and interactions happen within the context of interpersonal relationships <cit.>. Communication norms vary widely across relationships, based on the speakers' social distance, status/power, solidarity, and perceived mutual benefit <cit.>. These norms influence communication in content, grammar, framing, and style <cit.> and help reinforce (or subvert) the relationship between speakers <cit.>. Prior computational work mostly frames appropriateness as exhibiting positive affect and overlooks the fact that, in some relationships, conversations can be affectively negative but still appropriate <cit.>. For example, swearing is often considered a norm violation <cit.>, but can also be viewed as a signal of solidarity between close friends <cit.> or co-workers <cit.>. In such cases, the violation of taboo reinforces social ties by forming a sense of in-group membership where norms allow such messages <cit.>. In sociolinguistics, appropriateness is a function of both context and speech. <cit.> argues that “different situations, different topics, different genres require different linguistic styles and registers,” and <cit.> argues that the extent to which “something is suitable, effective or liked in some context” determines its appropriateness. Whether a discourse is appropriate depends strongly on the social context in which it is produced and received <cit.>, making the assessment of appropriateness a challenging task due to the need to explicitly model contextual norms. Behavioral choices are subject to the norms of “oughtness” <cit.>, and <cit.> suggest relationship types as an important factor influencing the normative expectations for relational communication. For example, while it may be considered appropriate for siblings to discuss their past romantic relationships in detail, the topic is likely to be perceived as taboo or inappropriate between romantic partners <cit.>. § BUILDING A DATASET OF CONTEXTUAL APPROPRIATENESS Prior work has shown that interpersonal relationships are a relevant context for the appropriateness of content <cit.>. While not all messages differ in this judgment—e.g., “hello” may be appropriate in nearly all settings—building a dataset that embodies this context sensitivity remains a challenge. Here, we describe our effort to build a new, large dataset of messages rated for contextual appropriateness, including how we select relationships and operationalize appropriateness. Due to the challenge of identifying and rating these messages, our dataset is built in two phases. Selecting Relationships Formally categorizing relationships has long been a challenging task for scholars <cit.>. We initially developed a broad list of relationships, drawing from 1) folk taxonomies <cit.>, e.g., common relationship types of friends <cit.>, family <cit.>, or romantic partners <cit.>; and 2) organizational and social roles <cit.>, e.g., those in a workplace, classroom, or functional settings, as these frequently indicate different social status, distance, or solidarity between individuals in the relationship. 
Using this preliminary list, four annotators performed a pilot assessment of coverage by discussing quotes from movie scripts, social media, or their imagination and identifying cases where an excluded relationship would have a different judgment for appropriateness. Ultimately, 49 types of relationships were included, shown in Table <ref>. Defining Appropriateness Appropriateness is a complex construct that loads on many social norms <cit.>. For instance, in some relationships, an individual may freely violate topical taboos, while in other relationships, appropriateness depends on factors like deference due to social status. Informed by the theory of appropriateness <cit.>, we operationalize inappropriate communication as follows: Given two people in a specified relationship and a message that is plausibly said under normal circumstances in this relationship, would the listener feel offended or uncomfortable? We use plausibility to avoid judging appropriateness for messages that would likely never be said, e.g., “would you cook me a hamburger?” would not be said from a doctor to a patient. We constrain the setting to what an annotator would consider normal circumstances for people in such a relationship when deciding whether the message would be perceived as appropriate; for example, having a teacher ask a student to say something offensive would be an abnormal context in which that message is appropriate. Thus, during annotation, annotators were asked to first judge if the message would be plausibly said and only, if so, rate its appropriateness. Judging appropriateness necessarily builds on the experiences and backgrounds of annotators. Culture, age, gender, and many other factors likely influence decisions on the situational appropriateness of specific messages. In making judgments, annotators were asked to use their own views and not to ascribe to a judgment of a specific identity. Raw Data Initial conversational data was selectively sampled from English-language Reddit. Much of Reddit is not conversational in the sense that comments are unlikely to match chit-chat. Further, few comments are likely to be context-sensitive. To address these concerns, we filter Reddit comments in two ways. First, we train a classifier to identify conversational comments, using 70,949 turns from the Empathetic dialogs data <cit.> and 225,907 turns from the Cornell movie dataset <cit.> as positive examples of conversational messages, and 296,854 turns from a random sample of Reddit comments as non-conversational messages. Full details are provided in Appendix <ref>. Second, we apply our conversational classifier to comments marked by Reddit as controversial in the Pushshift data <cit.>; while the decision logic for which comments are marked as controversial is proprietary to Reddit, controversial-labeled comments typically receive high numbers of both upvotes and downvotes by the community—but are not necessarily offensive. These two filters were applied to identify 145,210 total comments gathered from an arbitrary month of data (Feb. 2018). §.§ Annotation Phase 1 In the first phase of annotation, four annotators individually generated English-language messages they found to differ in appropriateness by relationship.[This process was developed after pilot tests showed the random sample approach was unlikely to surface interesting cases, but annotators found it easier to ideate and write their own messages after being exposed to some example communication.] 
Annotators were provided with a website interface that would randomly sample conversational, controversial Reddit comments as inspiration. Details of the annotation instructions and interface are provided in Appendix <ref>. The annotation process used a small number of in-person annotators rather than crowdsourcing to allow for task refinement: During the initial period of annotating, annotators met regularly to discuss their appropriateness judgments and disagreements. This discussion process was highly beneficial for refining the process for disentangling implausibility from inappropriateness. Once annotation was completed, annotators discussed and adjudicated their ratings for all messages. Annotators ultimately produced 401 messages and 5,029 total appropriateness ratings for those messages in the context of different relationships. §.§ Annotation Phase 2 Phase 2 uses an active learning approach to identify potentially relationship-sensitive messages to annotate from a large unlabeled corpus. A T5 prompt-based classifier was trained using OpenPrompt <cit.> to identify whether a given message would be appropriate to say to a person in a specific relationship. Details of this classifier are provided in Appendix <ref>. This classifier was run on all sampled data to identify instances where at least 30% of relationships were marked as appropriate or inappropriate; this filtering biases the data away from universally-appropriate or inappropriate messages, though annotators may still decide otherwise. Two annotators, one of which was not present in the previous annotation process, completed two rounds of norm-setting and pilot annotations to discuss judgments. Then, annotators rated 30 messages each, marking each for plausibility and, if plausible, appropriateness; they met to adjudicate and then rated another 41 messages. This produced 2,159 appropriateness ratings across these messages. Annotators had a Krippendorff's α of 0.56 on plausibility and, for messages where both rated as plausible, 0.46 on appropriateness. While this agreement initially seems moderate, annotators reviewed all disagreements, many of which were due to different interpretations of the same message, which influenced appropriate judgments rather than disagreements in appropriateness itself. Annotators then revised their own annotations in light of consensus in message meaning, bringing the plausibility agreement to 0.72 and appropriateness to 0.92. We view these numbers as more reliable estimates of the annotation process, recognizing that some messages may have different judgments due to annotators' values and personal experiences. We mark the 2,159 ratings in this data as Adjudicated data for later evaluation. Both annotators then independently annotated different samples of the Reddit data in order to maximize diversity in messages. Annotators were instructed to skip annotating messages that they viewed as less context-sensitive (e.g., offensive in all relationship contexts) or where the message did not appear conversational. Annotators provided 5,408 ratings on this second sample. We refer to this non-adjudicated data as Phase 2 data. §.§ Dataset Summary and Analysis The two phases produced a total of 12,236 appropriateness judgments across 5299 messages. Of these, 7,589 of the judgments were appropriate, and 4647 were inappropriate. Table <ref> shows examples of annotation judgments. In line with prior cultural studies of appropriateness <cit.>, three themes emerged during training. 
First, annotators noted the perception of the role of teasing in deciding appropriateness. Teasing messages are directed insults (mild or otherwise) aimed at the other party; comments such as “you are so dumb” are likely made in jest within close relationships such as best friends or siblings but inappropriate in many others. Second, messages' appropriateness depended in part on whether the relationship was perceived to be supportive; for example, the message “At least you called him by his correct name” could be one of encouragement in the face of a mistake (e.g., if said by a spouse) or a subtle insult that implies the listener should have known more about the third party. Third, differences in the power/status in the relationship influenced appropriateness, where very direct messages, e.g., “you made a mistake there.” were often perceived to be inappropriate when said to a person of higher status, a known violation of politeness strategies <cit.>. Ultimately, appropriateness was judged through a combination of these aspects. As an initial test of regularity in how the relationship influence perceived appropriateness, we measured the probability that a message appropriate for relationship r_i is also appropriate for r_j using all the annotations, shown in fig:app-heatmap and grouped by thematic categories. Clear block structure exists with some categories, e.g., Organization, indicating shared norms of appropriateness for relationships within the same category. In contrast, the Family and Social categories contain relationships with different power (e.g., parent) and social distance (e.g., friend vs. stranger), leading to varied judgments. Figure <ref> also reveals the asymmetry in which message themes are appropriate: While much of what is said for Role-based relationships is also appropriate in Social or Romance, the reverse is not true. § IDENTIFYING CONTEXTUAL INAPPROPRIATENESS Given the high potential agreement of annotators, we test whether models can similarly recognize whether a given message is appropriate if said in the context of a specific relationship. Experimental Setup Two classes of models were trained. Given the recent successes of prompt-based models, we build models using the OpenPrompt library <cit.> and, to support larger models, using the PEFT library <cit.>. The OpenPrompt library was used to train and models using the prompt “Is it appropriate for person1 to say "quote" to person2, "yes" or "no"? ” using the verbalization “yes' or ”no” for the masked token. Here, we fill in person1 and person2 to refer to the two parties in the relationship. Examples of filled-in templates and other prompts are reported in Appendix <ref>, though performance varied only slightly across different prompts. The PEFT library was used to train the and variants of the model <cit.>. This model has been pretrained for instruction following; therefore, based on suggested guidelines from <cit.>, we use the following prompt: “Rate whether it is inappropriate for this message to be said in the following social setting?\n setting: relationship description \n message: "quote"\n answer (yes or no):” Due to the resources required for training these larger models, no additional prompts were rigorously evaluated outside of initial pilot testing. The second class of models uses masked language model (MLM) fine-tuning on the token from an MLM to predict appropriateness. 
Here, we frame the instance using the same language as the OpenPrompt-based models but fill in the MASK with “yes” (i.e., indicating that the message is appropriate to say in the relationship). The classification model is then fine-tuned to classify whether this hard-coded judgment is correct or not. We test two recent MLMs, MiniLM <cit.>, a small distilled model, and DeBERTa-v3 <cit.>, a much larger model. These two models reflect extremes among relatively small MLMs and allow us to assess whether more social relationship knowledge might be embedded within a larger parameter space. Annotated data was split at the message level 70:10:20 into train, development, and test sets, resulting in 9,107 train, 1,100 development, and 2,029 test instances. We frame the task similar to offensive language detection and use Binary F1 as our metric where inappropriate is the positive class. Model performance is reported as the average across five random runs. Additional training details and per-seed performance are provided for all systems in Appendix <ref>. Two baseline systems are included. The first is random labels with respect to the empirical distribution in the training data. The second uses Perspective API <cit.> to rate the toxicity of the message, labeling it as toxic if the rating is above 0.7 on a scale of [0,1]; the same label is used for all relationships. While this baseline is unlikely to perform well, it serves as a reference to how much explicit toxicity is in the dataset, as some (though not all) of these messages are inappropriate to all relationships. Results Models accurately recognized how relationships influence the acceptability of a message, as seen in tab:model-performance. Prompt-based models were largely equivalent to MLM-based models, though both approaches far exceeded the baselines. The largest model, , ultimately performed best, though even the MiniLM offered promising performance, despite having several orders of magnitude fewer parameters. In general, models were more likely to label messages as inappropriate even when appropriate for a particular setting (more false positives). This performance may be more useful in settings where a model flags potentially inappropriate messages which are then reviewed by a human (e.g., content moderation). However, the performance for models as a whole suggests there is substantial room for improvement in how relationships as social context are integrated into the model's decisions. Error Analysis Different relationships can have very different norms in terms of what content is acceptable, as highlighted in fig:app-heatmap. How did model performance vary by relationship? Figure <ref> shows the binary F1 score of the model by relationship, relative to the percent of training instances the model saw that were inappropriate; Appendix Table <ref> shows full results per relationship. Model performance was highly correlated with the data bias for inappropriateness (r=0.69; p<0.01). The model had trouble identifying inappropriate comments for relationships where most messages are appropriate (e.g., friend, sibling) in contrast to more content-constrained relationships (boss, student, doctor). 
These low-performance relationships frequently come with complex social norms—e.g., the boundary between appropriate teasing and inappropriate hurtful comments for siblings <cit.>—and although such relationships have among the most training data, we speculate that additional training data is needed to model these norms, especially given the topical diversity in these relationships' conversations. § GENERALIZING TO UNSEEN RELATIONSHIPS Through their pretraining, LLMs have learned semantic representations of relationships as tokens. Our classification experiments show that LLMs can interpret these relationship-as-token representations to effectively judge whether a message is appropriate. To what extent do these representations allow the model to generalize about new relationships not seen in training? In particular, are models able to generalize if a category of relationship, e.g., all family relations, was never seen? Here, we conduct an ablation study where one of our folk categories is held out during training. Setup The model is trained with the same hyperparameters as the best-performing system on the full training data. We use the same data splits, holding out all training examples of relationships in one category during training. We report the Binary F1 from the test set on (1) relationships seen in training and (2) relationships in the held-out category. Note that because training set sizes may change substantially due to an imbalance of which relationships were annotated and because categories have related norms of acceptability, performance on seen-in-training is likely to differ from the full data. Results Ablated models varied substantially in their abilities to generalize to the unseen relationship types, as well as in their baseline performance (Figure <ref>). First, when ablating the larger categories of common relationships (e.g., Family, Social), the model performs well on seen-relationships, dropping performance only slightly, but is unable to accurately generalize to relationships in the unseen category. These unseen categories contain relationships that span a diverse range of norms with respect to power differences, social distance, and solidarity. While other categories contain partially-analogous relationships along these axes, e.g., parent-child and teacher-student both share a power difference, the drop in performance on held-out categories suggests the model is not representing these social norms in a way that allows easy transfer to predicting appropriateness for unseen relationships with similar norms. Second, relationships in three categories improve in performance when unseen: Organizational, Role-Based, and Parasocial. All three categories feature relationships that are more topically constrained around particular situations and settings. While the categories do contain nuance, e.g., the appropriateness around the power dynamics of boss-employee, the results suggest that models may do well in zero-shot settings where there is strong topic-relationship affinity—and messages outside of normal topics are inappropriate. Viewing these two trends together, we posit that the semantic representations of relationships in currently capture only minimal kinds of social norms—particularly those relating to topic—and these norms are not represented in a way that lets the model easily generalize to reasoning about relationships not seen in training. § HOW MUCH OF CONVERSATION IS CONTEXT SENSITIVE IN APPROPRIATENESS? 
Our annotation and computational models have shown that the relationship context matters in determining appropriateness. However, it is unclear how often conversations are sensitive to this context. For example, the majority of conversation may be appropriate to all relationships. Here, we aim to estimate this context sensitivity by testing the appropriateness of a message in counterfactual settings using an existing dataset labeled with relationship types. Experimental Setup To estimate context sensitivity, we use our most accurate model to label a large selection of dialog turns from the PRIDE dataset <cit.>. PRIDE consists of 64,844 dialog turns from movie scripts, each annotated for the relationship between the speaker and receiver, making it ideal as a high-plausibility conversational message said in relationships. However, some turns of the dialog are explicitly grounded in the setting of the movie, e.g., “How's it going, Pat?” which makes the turn too specific to that particular setting to accurately estimate appropriateness. Therefore, we run SpaCy NER <cit.> on the dialog and remove all turns containing references to people, companies, countries, and nationalities in order to keep the dialog generic and maximally plausible in many different relationship contexts. Further, we remove turns with only a single token or over 100 tokens. This filtering leaves 47,801 messages for analysis. PRIDE contains 18 unique relationships, 16 of which were already included in our categories (cf. tab:relationships); the two previously-unseen relationship types, described as “religious relationships” and “client/seller (commercial),” were also included since our model can accommodate zero-shot prediction.[These relationships were phrased as “from a person to someone in their church” and “from a person to a commercial associate” in our prompt model testing.] To text for context sensitivity, we apply our model and measure the appropriateness of the actual relationship context and then the counterfactual cases as if the message had been said in an alternative relationship context seen in their data. This setup allows us to assess whether if a message was appropriate in its intended relationship context, would it still be appropriate in another. Results Considering only appropriate messages and excluding the unusual enemy relationship from consideration, we find that roughly 19% of the appropriate-as-said messages in the data would be inappropriate if said in the context of a different relationship. Figure <ref> shows the probability that a message acceptable in some other relationship context would also be acceptable in the given context; the striking decrease in the likelihood of acceptability follows the increasingly constrained social norms around a relationship. For example, while friends and loved ones have broad latitude to discuss sensitive topics <cit.>, Role-based relationships and those with larger power differences are more constrained in what is considered acceptable conversation. While the movie dialog in the PRIDE dataset likely differs from a natural dialog, these results point to relationships as important contexts in natural language understanding. More generally, we suggest a need for socially-aware models to identify offensive language. While substantial effort has been put into identifying explicit toxic or abusive language <cit.>, few models, if any, incorporate the context in which the message is said. 
These models typically rely on previous conversation turns <cit.> or modeling community-level social norms <cit.> to understand how the context may shift whether the message is perceived as appropriate. Our result suggests that the social context—and particularly social relationships—are highly influential in measuring appropriateness. Indeed, together with the result showing the (expected) low performance of the Perspective API toxicity detector, these results suggest NLP models deployed in social settings are likely missing identifying many offensive messages due to their lack of explicitly modeling of social relations. As NLP tools make their way into the workplace setting, which frequently features a mix of Organizational, Social, and Romance ties, explicitly modeling context will likely be necessary. § IDENTIFYING SUBTLE OFFENSIVENESS USING CONTEXTUAL APPROPRIATENESS Prior NLP studies of subtly inappropriate language often omit the social context in which a statement is said <cit.>, yet it is often this context that makes a statement inappropriate. For example, a teacher asking a student “Do you need help writing that?” is appropriate, whereas a student asking a teacher the same question may seem rude. We hypothesize that modeling the relative appropriateness of a message across relationships can help identify types of subtly offensive language. We test this hypothesis using datasets for two phenomena: condescension <cit.> and (im)politeness <cit.>. Experimental Setup The model is used to predict the appropriateness of each message in the training data in the TalkDown dataset for condescension <cit.>, and the Stanford Politeness Corpus <cit.>. Each message is represented as a binary vector of inappropriateness judgments for each relationship. TalkDown is based on Reddit comments, which our model has seen, whereas the politeness data is drawn from Wikipedia and StackExchange conversations. We adopt the same train and test splits as in the respective papers and fit a logistic regression classifier for each dataset to predict whether a message is condescending or impolite, respectively, from the per-relationship appropriateness vector. The logistic regression model uses Scikit-learn <cit.>; for each task, we adopt the evaluation metric used in the respective paper. Appendix <ref> has additional details. Results The relationship appropriateness scores were meaningfully predictive of subtle offensiveness, as seen in tab:talkdown for condescension and tab:politeness for impoliteness. In both settings, the appropriateness features provide a statistically significant improvement over random performance, indicating that adding relationships as context can help identify subtly offensive messages. Further, despite the classifier's relative simplicity, the appropriateness features alone outperform the classifier used in <cit.> in the balanced setting, underscoring how explicitly modeling relationships can still be competitive with LLM-based approaches. Performance at recognizing (im)politeness from relationship-appropriateness was lower than the hand-crafted or purely bag-of-words approaches. Yet, this gap is expected given that dataset's design; <cit.> focus on identifying discourse moves, and the politeness classification task comes from messages at the top and bottom quartiles of their politeness rating. 
Messages in the bottom quartile may be less polite, rather than impolite, and therefore appropriate in more context, thereby making relationship-appropriate judgments less discriminating as features. § CONCLUSION “Looking beautiful today!”, “You look like you need a hand with that”, and “When can I see you again?”—in the right contexts, such messages can bring a smile, but in other contexts, such messages are likely to be viewed as inappropriate. In this paper, we aim to detect such inappropriate messages by explicitly modeling the relationship between people as a social context. Through a large-scale annotation, we introduce a new dataset of over 12,236 ratings of appropriateness for 49 relationships. In experiments, we show that models can accurately identify inappropriateness by making use of pre-trained representations of relationships. Further, through counterfactual analysis, we find a substantial minority of content is contextually-sensitive: roughly 19% of the appropriate messages we analyzed would not be appropriate if said in some other relationship context. Our work points to a growing need to consider meaning within the social context, particularly for identifying subtly offensive messages. All data and code are released at <https://github.com/davidjurgens/contextual-appropriateness>. § ACKNOWLEDGMENTS The authors thank Aparna Anathasubramaniam, Minje Choi, and Jiaxin Pei for their timely and valuable feedback on the paper. This work was supported by the National Science Foundation under Grant Nos. IIS-1850221, IIS-2007251 and IIS-2143529. § LIMITATIONS This paper has three main limitations worth noting. First and foremost, while our paper aims to model the social context in which a message is said, the current context is limited to only the parties' relationship. In practice, the social context encompasses a wide variety of other factors, such as the sociodemographics of the parties, the culture and setting of the conversation, and the history of the parties. Even relationships themselves are often much more nuanced and the appropriateness may vary widely based on setting, e.g., statements said between spouses may vary in appropriateness when made in public versus private settings. These contextual factors are likely necessary for a full account of the effect of social context on how messages should be perceived. Our work provides an initial step in this direction by making the relationship explicit, but more work remains to be done. Future work may examine how to incorporate these aspects, such as by directly inputting the situation's social network as context using graph embedding techniques <cit.>, where the network is labeled with relationships <cit.>, or by modeling relationships particular types of settings such as in-person, phone, texting, or other online communication, which each have different norms. Second, our data includes annotations on a finite set of relationships, while many more unique relationships are possible in practice, e.g., customer or pastor. Our initial set was developed based on discussions among annotators and aimed at high but not complete coverage due to the increasing complexity of the annotation task as more relationships were added. Our results in Section <ref> suggest that our best model could be able to generalize to new types of relationships in some settings and zero-shot results on two new relationship types not seen in training (a fellow church member and a commercial relationship) match expectations of context sensitivity, (cf. 
Figure <ref> . However, performance is likely limited for less-common relationships without additional training data to describe the norms of appropriateness in this context; and, based on the error analysis in Section <ref>, models are currently unlikely to generalize to unseen relationships that have complex sensitivity norms. In addition, new settings such as online spaces may require additional definitions of relationships as individuals interact with each other anonymously. Third, our judgments of appropriateness were drawn from five annotators total, each of whom had different views of appropriateness based on their values and life experience. While our analysis of agreement with the Adjudicated data (Section <ref>) suggests that when annotators can reach a consensus on a message's meaning, they are highly likely to agree on appropriateness, we nonetheless view that our annotations are likely to primarily reflect the values of the annotators and may not generalize to other social or cultural contexts where the norms of relationships differ. Future work is needed to explore how these norms differ through additional annotation, and we hope that our dataset will provide a reference for comparison to these judgments. For example, future work may make use of annotation schemes that explicitly model disagreements <cit.> or personalized judgments <cit.>; such approaches may be able to better represent common factors influencing appropriateness judgments. § ETHICAL CONSIDERATIONS We note three points on ethics. First, we recognize that appropriateness is a value judgment, and therefore our data is limited here by the viewpoints of the annotators. Multiple works on offensive language have shown that the values and identities of annotators can bias the judgments and potentially further marginalize communities of practice whose views and norms are not present <cit.>. We have attempted to mitigate this risk by adding diversity to our annotator pool with respect to gender, age, and culture, yet our limited pool size necessitates that not all viewpoints will be present. Given that we show relationships do matter in judging appropriateness, we hope that future work will add diversity through new additions and data to study relationships. We will also release demographic information on annotators as a part of our dataset to help make potential biases more explicit and more easily addressed. The annotators themselves were authors of the study and were compensated as a part of their normal work with a living wage. Due to the nature of our filtering, the vast majority of our content was not explicitly toxic. Nonetheless, some comments did contain objectionable messages, and annotators were provided guidance on how to seek self-care if the messages created distress. With any new tool to identify offensive or abusive language comes a dual use by an adversarial actor to exploit that tool to find new ways to harass or abuse others while still “abiding by the rules.” Our work has shown that relationships are effective context (and features) for identifying previously-unrecognized inappropriateness. This new capability has the benefit of potentially recognizing more inappropriate messages before they reach their destination. However, some adversaries could still use our data and model to screen their own messages to find those that still are classified as appropriate (while being inappropriate in practice) to evade detection. 
Nevertheless, given the new ability to identify context-sensitive offensive messages—which we show can represent a substantial percentage of conversation (Section <ref>)—we view the benefits as outweighing the risk. acl_natbib § ANNOTATION DETAILS This section describes the details of the annotation process. Annotators were the authors of this paper and were compensated for their work as a part of their normal duties; no additional payments were provided. The annotation interface was designed using Potato <cit.>, shown in Figure <ref>, and was accessed through a browser, which allowed annotators to start and stop their labeling at any time. Annotators were allowed to revise their annotations at any time. During annotation, annotators were presented with the message to be annotated and collapsible instructions for annotation. Figure <ref> shows the full written instructions shown to annotators. The instructions were refined through an iterative process throughout the project, and annotators regularly communicated about ambiguity. The instructions were designed to let the annotators know the intent of the study and the downstream tasks that data would be used for. § CONVERSATION CLASSIFIER DETAILS The conversational classifier was used during the initial data sampling phase to identify comments on Reddit that could plausibly have been said in a conversation. This classifier is intended only as a filter to improve data quality by reducing the number of non-conversation comments (e.g., those with Reddit formatting, long monologues, and comments written in a non-conversational register). We have two datasets of known conversations: 70,949 turns from the Empathetic dialogs data <cit.> and 225,907 turns from the Cornell movie dataset <cit.> as positive examples of conversational messages. We then sample an equivalent number of 296,854 turns from a random sample of Reddit comments as non-conversational messages. While some of these Reddit messages are likely conversational, this classification scheme is only a heuristic aimed at helping filter data. A held-out set of 74,212 instances was used for evaluation, balanced between conversational and not. A MiniLM classifier <cit.> was trained using Huggingface Transformers <cit.> for five epochs, keeping the model with the lowest training loss at any epoch; Epoch 5 was selected. The model attained an F1 of 0.94 for the held-out data indicating it was accurate at distinguishing the conversational turns from the random sample of Reddit comments. We apply this classifier to 1,917,346 comments from Reddit during the month of February 2018 and identify 145,210 whose probability of being a conversation is >0.5. We retain these comments as potential comments to annotate in Phase 2 (Section <ref>). §.§ Computational resources All of our experiments were conducted on an Ubuntu 16.04.7 LTS machine installed with NVIDIA RTX A5000 and RTX A6000 GPUs having CUDA 11.3. The Python packages used in our experiments include Pytorch 1.17.0, Transformers 4.25.1, PEFT 0.3.0, OpenPrompt 1.0.1, pandas 1.1.4, spacy 3.3.2, and Sci-kit learn 1.2.0. §.§ Specification of LLMs The LLMs used in this paper were downloaded from https://huggingface.co/huggingface.co. The model and their parameter sizes are listed in Table <ref>. §.§ Classifiers from Sklearn For the classification of politeness and condescension tasks, we used logistic regression from sklearn with the solver as `lbfgs' and max_iter set to 400. 
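For concreteness, a minimal sketch of the downstream classifier configuration described above is given below. Each message is represented by a binary vector of per-relationship inappropriateness judgments produced by the appropriateness model, and a scikit-learn logistic regression with solver 'lbfgs' and max_iter 400 predicts the subtle-offensiveness label. The randomly generated arrays and variable names are stand-ins for the real TalkDown and politeness features and labels, not the released data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_relationships = 49

# Stand-in features: one row per message, one column per relationship,
# where 1 means the appropriateness model judged the message inappropriate.
X_train = rng.integers(0, 2, size=(200, n_relationships))
y_train = rng.integers(0, 2, size=200)   # 1 = condescending / impolite
X_test = rng.integers(0, 2, size=(50, n_relationships))
y_test = rng.integers(0, 2, size=50)

clf = LogisticRegression(solver="lbfgs", max_iter=400)
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
print("binary F1:", f1_score(y_test, preds, pos_label=1))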
§ PHASE 1 CLASSIFIER The phase-1 LLM classifier was trained using the pilot training data and the OpenPrompt framework. In this framework, we use a batch size of 4, the maximum sequence length was set to 256, decoder_max_length=3, truncate_method="head", and teacher_forcing and predict_eos_token were set to default values. The prompt used for the model was framed as a yes/no question - "is it appropriate for PERSON1 to say QUOTE to PERSON2?". § ADDITIONAL PROMPT-BASED MODEL DETAILS We train and using the OpenPrompt framework. In this framework, we use a batch size of 16, the maximum sequence length was set to 256, decoder_max_length=3, truncate_method="head", and teacher_forcing and predict_eos_token were set to default values. The model was trained using early stopping and the AdamW optimizer with a learning rate set to 1e-4. The different prompts that we used before finalizing the prompt as "Is it appropriate for PERSON1 to say "QUOTE" to PERSON2?, "yes" or "no"? are reported in table <ref>. We train the and models using the PEFT library. Models were trained with a batch size of 96 and 32, respectively. Both models used a maximum sequence length of 192 and learning rate of 1e-2 with AdamW, using all other default library parameters. The model was trained for 20 epochs, keeping the best-performing model by binary F1 on the development dataset for each seed. § ADDITIONAL RESULTS §.§ Development Set Performance The performance of the different models on the development dataset is reported in Table <ref> and performance on the test set with standard errors is reported in Table <ref>. §.§ Analysis of Relationship Predictions The data annotation process showed clear associations between pairs of relationships in terms of how often a message would be appropriate (Figure <ref>). However, the training data for that figure only includes annotations on relationships annotators selected. What structure or regularity might we see from analyzing similarities between all our relationships through model predictions? As a qualitative experiment, we use the model to label the subset of the PRIDE dataset (Section <ref>) for the appropriateness of all 49 relationships in our training data. This produces a binary matrix of 49 × 47,801. We use PCA to capture regularity and then project relationships onto a 2D visualization using t-SNE <cit.>, which is aimed at preserving local similarity in the spatial arrangement. If model predictions are capturing shared norms, we view t-SNE as potentially more useful than a PCA projection, as we want to visualize which relationships with similar judgments as being nearby (what t-SNE does) rather than optimizing the visualization to the global structure of distances (what PCA does). The t-SNE projection was designed using guidance from <cit.>; a perplexity of 40 was used. The resulting visualization, shown in fig:tsne, captures expected regularity. While the projection is only a visual tool, and aspects such as distance are not meaningful in t-SNE visualizations, the grouping and neighbors suggest the model is sensitive to power/status and social distance in how it decides appropriateness based on the relationship. 
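The two prompt formats quoted above can be assembled as plain strings, as in the sketch below. The helper names and the example relationship phrasings are ours and purely illustrative; the actual OpenPrompt and PEFT pipelines fill their templates internally.

def openprompt_style(person1: str, person2: str, quote: str) -> str:
    # Verbalized with "yes"/"no" at the masked position during training.
    return (f'Is it appropriate for {person1} to say "{quote}" to {person2}, '
            f'"yes" or "no"? ')

def flan_style(relationship: str, quote: str) -> str:
    return ("Rate whether it is inappropriate for this message to be said in "
            "the following social setting?\n"
            f"setting: {relationship}\n"
            f'message: "{quote}"\n'
            "answer (yes or no):")

print(openprompt_style("a student", "their teacher",
                       "Do you need help writing that?"))
print(flan_style("from a student to their teacher",
                 "Do you need help writing that?"))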
§.§ Per Relationship Results Table <ref> shows the peformance of the model on the test set, broken down by relationship § ADDITIONAL EXPERIMENTAL SETUP DETAILS FOR IDENTIFYING SUBTLY OFFENSIVE MESSAGES For experiments with both the TalkDown corpus <cit.> and Stanford Politeness Corpus <cit.>, the trained model was used in a zero-shot setting with no additional training. For the Politeness corpus, politeness ratings are made at the utterance level, outside of any dialog context. As a result, the existing prompt was used (sec:identifying, Experimental Setup) to assess relationship-specific appropriateness. Two modifications were necessary for the TalkDown corpus. First, the TalkDown corpus's data is rated at the turn level, with condescension judgments based on the interpretation of a reply to a specific piece of quoted text. <cit.> note that incorporating both the quote and reply into the input resulted in better performance. Therefore, we modify our initial prompt slightly as follows: “Rate whether it is inappropriate for message A to be said in response to the message B in the specified social setting: \n A: quoted text \n B: reply text \n setting: relationship description \n answer (yes or no):”. Since the model was trained specifically for instruction following <cit.>, we expected the model to generate similar outputs as our original prompt. Second, some of the quoted and reply text in TalkDown can be quite long (hundreds of words). Since the adapted prompt contains both quote and reply, we use an flexible truncation process to maximize the content that can still fit within the maximum input token sequence length (196). First, quoted text over 50 tokens is truncated to the first 50, using the tokenizer to segment words. Then, if the full input (with prompt instructions) still exceeds the maximum input length, we truncate both the quoted text and reply evenly, still keeping at least the first then 10 tokens of each.
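A minimal sketch of this flexible truncation procedure is shown below, assuming a 196-token input budget, a 50-token cap on the quoted text, and a minimum of 10 retained tokens per side. The whitespace tokenize() stand-in, the helper names, and the even-trimming loop are our own reading of the procedure; the real implementation uses the model tokenizer.

MAX_INPUT_TOKENS = 196
QUOTE_CAP = 50
MIN_KEEP = 10

def tokenize(text: str) -> list:
    # Stand-in for the model tokenizer.
    return text.split()

def truncate_pair(quote: str, reply: str, prompt_overhead: int):
    q = tokenize(quote)[:QUOTE_CAP]
    r = tokenize(reply)
    # Trim quote and reply evenly until the full prompt fits the budget,
    # keeping at least MIN_KEEP tokens on each side.
    while prompt_overhead + len(q) + len(r) > MAX_INPUT_TOKENS:
        if len(q) > MIN_KEEP and len(q) >= len(r):
            q = q[:-1]
        elif len(r) > MIN_KEEP:
            r = r[:-1]
        elif len(q) > MIN_KEEP:
            q = q[:-1]
        else:
            break  # both sides at the minimum length; accept the overflow
    return " ".join(q), " ".join(r)

q, r = truncate_pair("so " * 80, "then " * 300, prompt_overhead=40)
print(len(q.split()), len(r.split()))  # 50 106 with a 40-token prompt overhead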
http://arxiv.org/abs/2307.00395v1
20230701174912
MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications
[ "Mustafa Munir", "William Avery", "Radu Marculescu" ]
cs.CV
[ "cs.CV", "cs.LG" ]
MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications Mustafa Munir* The University of Texas at Austin [email protected] William Avery* The University of Texas at Austin [email protected] Radu Marculescu The University of Texas at Austin [email protected] ============================================================================================================================================================================================================================ *Equal contribution Traditionally, convolutional neural networks (CNN) and vision transformers (ViT) have dominated computer vision. However, recently proposed vision graph neural networks (ViG) provide a new avenue for exploration. Unfortunately, for mobile applications, ViGs are computationally expensive due to the overhead of representing images as graph structures. In this work, we propose a new graph-based sparse attention mechanism, Sparse Vision Graph Attention (SVGA), that is designed for ViGs running on mobile devices. Additionally, we propose the first hybrid CNN-GNN architecture for vision tasks on mobile devices, MobileViG, which uses SVGA. Extensive experiments show that MobileViG beats existing ViG models and existing mobile CNN and ViT architectures in terms of accuracy and/or speed on image classification, object detection, and instance segmentation tasks. Our fastest model, MobileViG-Ti, achieves 75.7% top-1 accuracy on ImageNet-1K with 0.78 ms inference latency on iPhone 13 Mini NPU (compiled with CoreML), which is faster than MobileNetV2x1.4 (1.02 ms, 74.7% top-1) and MobileNetV2x1.0 (0.81 ms, 71.8% top-1). Our largest model, MobileViG-B obtains 82.6% top-1 accuracy with only 2.30 ms latency, which is faster and more accurate than the similarly sized EfficientFormer-L3 model (2.77 ms, 82.4%). Our work proves that well designed hybrid CNN-GNN architectures can be a new avenue of exploration for designing models that are extremely fast and accurate on mobile devices. Our code is publicly available at <https://github.com/SLDGroup/MobileViG>. § INTRODUCTION Artificial intelligence (AI) and machine learning (ML) have had explosive growth in the past decade. In computer vision, the key driver behind this growth has been the re-emergence of neural networks, especially convolutional neural networks (CNNs) and more recently vision transformers <cit.>. Even though CNNs trained via back-propagation were invented in the 1980s <cit.>, they were used for more small-scale tasks such as character recognition <cit.>. The potential of CNNs to re-shape the field of artificial intelligence was not fully realized until AlexNet <cit.> was introduced in the ImageNet <cit.> competition. Further advancements to CNN architectures have been made improving their accuracy, efficiency, and speed <cit.>. Along with CNN architectures, pure multi-layer perceptron (MLP) architectures and MLP-like architectures have also shown promise as backbones for general-purpose vision tasks <cit.> Though CNNs and MLPs had become widely used in computer vision, the field of natural language processing used recurrent neural networks (RNNs), specifically long-short term memory (LSTM), networks due to the disparity between the tasks of vision and language <cit.>. Though LSTMs are still used, they have largely been replaced with transformer architectures in NLP tasks <cit.>. With the introduction of Vision Transformer (ViT) <cit.> a network architecture applicable to both language and vision domains was introduced. 
By splitting an image into a sequence of patch embeddings an image can be transformed into an input usable by transformer modules <cit.>. One of the major advantages of the transformer architecture over CNNs or MLPs is its global receptive field, allowing it to learn from distant object interactions in images. Graph neural networks (GNNs) have developed to operate on graph-based structures such as biological networks, social networks, or citation networks <cit.>. GNNs have even been proposed for tasks such as node classification <cit.>, drug discovery <cit.>, fraud detection <cit.>, and now computer vision tasks with the recently proposed Vision GNN (ViG) <cit.>. In short, ViG divides an image into patches and then connects the patches through the K-nearest neighbors (KNN) algorithm <cit.>, thus providing the ability to process global object interactions similar to ViTs. Research in computer vision for mobile applications has seen rapid growth, leading to hybrid architectures using CNNs for learning spatially local representations and vision transformers (ViT) for learning global representations <cit.>. Current ViG models are not suited for mobile tasks, as they are inefficient and slow when running on mobile devices. The concepts learned from the design of CNN and ViT models can be explored to determine whether CNN-GNN hybrid models can provide the speed of CNN-based models along with the accuracy of ViT-based models. In this work, we investigate hybrid CNN-GNN architectures for computer vision on mobile devices and develop a graph-based attention mechanism that can compete with existing efficient architectures. We summarize our contributions as follows: * We propose a new graph-based sparse attention method designed for mobile vision applications. We call our attention method Sparse Vision Graph Attention (SVGA). Our method is lightweight as it does not require reshaping and incurs little overhead in graph construction as compared to previous methods. * We propose a novel mobile CNN-GNN architecture for vision tasks using our proposed SVGA, max-relative graph convolution <cit.>, and concepts from mobile CNN and mobile vision transformer architectures <cit.> that we call MobileViG. * Our proposed model, MobileViG, matches or beats existing vision graph neural network (ViG), mobile convolutional neural network (CNN), and mobile vision transformer (ViT) architectures in terms of accuracy and/or speed on three representative vision tasks: ImageNet image classification, COCO object detection, and COCO instance segmentation. To the best of our knowledge, we are the first to investigate hybrid CNN-GNN architectures for mobile vision applications. Our proposed SVGA attention method and MobileViG architecture open a new path of exploration for state-of-the-art mobile architectures and ViG architectures. This paper is structured as follows. Section 2 covers related work in the ViG and mobile architecture space. Section 3 describes the design methodology behind SVGA and the MobileViG architecture. Section 4 describes experimental setup and results for ImageNet-1k image classification, COCO object detection, and COCO instance segmentation. Lastly, Section 5 concludes the paper and suggests future work with ViGs in mobile architecture design. § RELATED WORK ViG <cit.> is proposed as an alternative to CNNs and ViTs due to its capacity to represent image data in a more flexible format. ViG represents images through using the KNN algorithm <cit.>, where each pixel in the image attends to similar pixels. 
ViG achieves comparable performance to popular ViT models, DeiT <cit.> and SwinTransformer <cit.>, suggesting it is worth further investigations. Despite the success of ViT-based models in vision tasks, they are still slower when compared to lightweight CNN-based models <cit.>, in contrast CNN-based models lack the global receptive field of ViT-based models. Thus, ViG-based models may be a possible solution by providing speeds faster than ViT-based models and accuracies higher than CNN-based models. To the best of our knowledge, there are no works on mobile ViGs at this time; however, there are many existing works in the mobile CNN and hybrid model space. We classify mobile architecture designs into two primary categories: convolutional neural network (CNN) models and hybrid CNN-ViT models, which blend elements of CNNs and ViTs. The MobileNetv2 <cit.> and EfficientNet <cit.> families of CNN-based architectures are some of the first mobile models to see success in common image tasks. These models are lightweight with fast inference speeds. However, purely CNN-based models have steadily been replaced by hybrid competitors. There are a vast number of hybrid mobile models, including MobileViTv2 <cit.>, EdgeViT <cit.> LeViT <cit.>, and EfficientFormerv2 <cit.>. These hybrid models consistently beat MobileNetv2 in image classification, object detection, and instance segmentation tasks, but some of these models do not always perform as well in terms of latency. The latency difference can be tied to the inclusion of ViT blocks, which have traditionally been slower on mobile hardware. To improve this state of affairs we propose MobileViG, which provides speeds comparable to MobileNetv2<cit.> and accuracies comparable to EfficientFormer <cit.>. § METHODOLOGY In this section, we describe the SVGA algorithm and provide details on the MobileViG architecture design. More precisely, Section 3.1 describes the SVGA algorithm. Section 3.2 explains how we adapt the Grapher module from ViG <cit.> to create the SVGA block. Section 3.3 describes how we combine the SVGA blocks along with inverted residual blocks for local processing to create MobileViG-Ti, MobileViG-S, MobileViG-M, and MobileViG-B. §.§ Sparse Vision Graph Attention We propose Sparse Vision Graph Attention (SVGA) as a mobile-friendly alternative to KNN graph attention from Vision GNN <cit.>. The KNN-based graph attention introduces two non-mobile-friendly components, KNN computation and input reshaping, that we remove with SVGA. In greater detail, the KNN computation is required for every input image, since the nearest neighbors of each pixel cannot be known ahead of time. This results in a graph with seemingly random connections as seen in Figure <ref>a. Due to the unstructured nature of KNN, the authors of <cit.> reshape the input image from a 4D to 3D tensor, allowing them to properly align the features of connected pixels for graph convolution. Following the graph convolution, the input must be reshaped from 3D back to 4D for subsequent convolutional layers. Thus, KNN-based attention requires the KNN computation and two reshaping operations, both of which are costly on mobile devices. To remove the overhead of the KNN computation and reshaping operations, SVGA assumes a fixed graph, where each pixel is connected to every K^th pixel in its row and column. For example, given an 8×8 image and K=2, the top left pixel would be connected to every second pixel across its row and every second pixel down its column as seen in Figure <ref>b. 
This same pattern is repeated for every pixel in the input image. Since the graph has a fixed structure (i.e., each pixel will have the same connections for all 8×8 input images), the input image does not have to be reshaped to perform the graph convolution. Instead, it can be implemented using rolling operations across the two image dimensions, denoted as roll_right and roll_down in Algorithm <ref>. The first parameter to the roll operation is the input to roll, and the second is the distance to roll in the right or down direction. Using the example from Figure <ref>b where K=2, the top left pixel can be aligned with every second pixel in its row by rolling the image twice to the right, four times to the right, and six times to the right. The same can be done for every second pixel in its column, except by rolling down. Note that since every pixel is connected in the same way, the rolling operations used to align the top left pixel with its connections simultaneously align every other pixel in the image with its connections. In MobileViG, graph convolution is performed using max-relative graph convolution (MRConv). Therefore, after every roll_right and roll_down operation, the difference between the original input image and the rolled version is computed, denoted as X_r and X_c in Algorithm <ref>, and the max operation is taken element wise and stored in X_j, also denoted in Algorithm <ref>. After completing the rolling and max-relative operations, a final Conv2d is performed. Through this approach, SVGA trades the KNN computation for cheaper rolling operations, consequently not requiring reshaping to perform the graph convolution. We note that SVGA eschews the representation flexibility of KNN in favor of being mobile friendly. §.§ SVGA Block We insert SVGA and the updated MRConv layer into the Grapher block proposed in Vision GNN <cit.>. Given an input feature X∈ℝ^N × N, the updated Grapher is expressed as 1 Y=σ(MRConv(XW_in))W_out+X where Y∈ℝ^N × N, W_in and W_out are fully connected layer weights, and σ is a GeLU activation. We also change the number of filter groups from 4 (the value used in Vision GNN <cit.>) to 1 in the MRConv step to increase the expressive potential of the MRConv layer without a noticeable increase in latency. The updated Grapher module is visually depicted in Figure <ref>d Following the updated Grapher, we use the feed-forward network (FFN) module as proposed in Vision GNN <cit.> and shown in Figure <ref>e The FFN module is a two layer MLP expressed as 2 Z=σ(XW_1)W_2+Y where Z∈ℝ^N × N, W_1 and W_2 are fully connected layer weights, and σ is once again GeLU. We call this combination of updated Grapher and FFN an SVGA block, as shown in Figure <ref>c. §.§ MobileViG Architecture The MobileViG architecture shown in Figure <ref>a is composed of a convolutional stem, followed by three stages of inverted residual blocks (MBConv) with an expansion ratio of four for local processing as proposed in MobileNetv2 <cit.>. Within the MBConv blocks, we swap ReLU6 for GeLU as it has been shown to improve performance in computer vision tasks <cit.>. The MBConv blocks consist of a 1×1 convolution plus batch normalization (BN) and GeLU, a depth-wise 3×3 convolution plus BN and GeLU, and lastly a 1×1 convolution plus BN and a residual connection as seen in Figure <ref>b. Following the MBConv blocks we have one stage of SVGA blocks to capture global information as seen in Figure <ref>a. We also have a convolutional head after the SVGA blocks for classification. 
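Before moving on to the stage-level layout below, the SVGA construction and the SVGA block described above can be made concrete with a short sketch. The following PyTorch-style code is written directly from the description and Equations (1)–(2); it is not the authors' released implementation, so details such as the initialization of X_j, the use of a 1×1 convolution plus batch normalization inside MRConv, the 2C→C channel mixing, and the default K=2 are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SVGAMaxRelativeConv(nn.Module):
    """Sketch of max-relative graph convolution over the fixed SVGA graph.

    Each pixel is connected to every K-th pixel along its row and column, so
    neighbor features can be aligned with plain tensor rolls instead of KNN
    plus reshaping. X_j is initialized to zero here; the exact initialization
    in the paper's Algorithm may differ.
    """
    def __init__(self, channels: int, K: int = 2):
        super().__init__()
        self.K = K
        # One plausible realization of MRConv: concatenate input with the
        # max-relative neighbor features and mix with a 1x1 convolution.
        self.conv = nn.Sequential(nn.Conv2d(2 * channels, channels, kernel_size=1),
                                  nn.BatchNorm2d(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, H, W = x.shape
        x_j = torch.zeros_like(x)
        # Roll right/down by multiples of K; each roll aligns every pixel with one neighbor.
        for shift in range(self.K, W, self.K):
            x_r = torch.roll(x, shifts=shift, dims=3) - x   # relative feature, row direction
            x_j = torch.maximum(x_j, x_r)
        for shift in range(self.K, H, self.K):
            x_c = torch.roll(x, shifts=shift, dims=2) - x   # relative feature, column direction
            x_j = torch.maximum(x_j, x_c)
        return self.conv(torch.cat([x, x_j], dim=1))

class SVGABlock(nn.Module):
    """Updated Grapher (Eq. 1) followed by the two-layer FFN (Eq. 2), both with residuals."""
    def __init__(self, channels: int, K: int = 2, ffn_ratio: int = 4):
        super().__init__()
        self.fc_in = nn.Conv2d(channels, channels, kernel_size=1)    # W_in
        self.graph_conv = SVGAMaxRelativeConv(channels, K)
        self.fc_out = nn.Conv2d(channels, channels, kernel_size=1)   # W_out
        hidden = ffn_ratio * channels
        self.ffn = nn.Sequential(nn.Conv2d(channels, hidden, kernel_size=1), nn.GELU(),
                                 nn.Conv2d(hidden, channels, kernel_size=1))
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.fc_out(self.act(self.graph_conv(self.fc_in(x)))) + x   # Eq. (1)
        return self.ffn(y) + y                                          # Eq. (2)

if __name__ == "__main__":
    block = SVGABlock(channels=64, K=2)
    out = block(torch.randn(1, 64, 7, 7))
    print(out.shape)  # torch.Size([1, 64, 7, 7])
```

Because the connection pattern is fixed, the rolls can be fused or precomputed at export time; the loop form above is kept only for readability.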
After each MBConv stage, a downsampling step halves the input resolution and expands the channel dimension. Each stage is composed of multiple MBConv or SVGA blocks, where the number of repetitions is changed depending on model size. The channel dimensions and number of blocks repeated per stage for MobileViG-Ti, MobileViG-S, MobileViG-M, and MobileViG-B can be seen in Table <ref>. § EXPERIMENTAL RESULTS We compare MobileViG to ViG <cit.> and show its superior performance in terms of latency, model size, and image classification accuracy on ImageNet-1k <cit.> in Table <ref>. We also compare MobileViG to several mobile models and show that, for each model, it has superior or comparable performance in terms of accuracy and latency in Table <ref>. §.§ Image Classification We implement the model using PyTorch 1.12 <cit.> and Timm library <cit.>.We use 8 NVIDIA A100 GPUs to train each model, with an effective batch size of 1024. The models are trained from scratch for 300 epochs on ImageNet-1K <cit.> with AdamW optimizer <cit.>. Learning rate is set to 2e-3 with cosine annealing schedule. We use a standard image resolution, 224 × 224, for both training and testing. Similar to DeiT <cit.>, we perform knowledge distillation using RegNetY-16GF <cit.> with 82.9% top-1 accuracy. For data augmentation we use RandAugment, Mixup, Cutmix, random erasing, and repeated augment. We use an iPhone 13 Mini (iOS 16) to benchmark latency on NPU and GPU. The models are compiled with CoreML and latency is averaged over 1000 predictions <cit.>. As seen in Table <ref>, for a similar number of parameters, MobileViG outperforms Pyramid ViG <cit.> both in accuracy and GPU latency. For example, for 3.5 M fewer parameters, MobileViG-S matches Pyramid ViG-Ti in top-1 accuracy, while being 2.83× faster. Additionally, for 0.6 M fewer parameters, MobileViG-B beats Pyramid ViG-S by 0.5% in top-1 accuracy, while being 2.08× faster. When compared to mobile models in Table <ref>, MobileViG consistently beats every model in at least NPU latency, GPU latency, or accuracy. MobileViG-Ti is faster than MobileNetv2 with 3.9% higher top-1 accuracy. It also matches EfficientFormerv2 <cit.> in top-1 while having a slight edge in NPU and GPU latency. MobileViG-S is nearly 2x faster than EfficientNet-B0 <cit.> in NPU latency and has 0.5% higher top-1 accuracy. Compared to MobileViTv2-1.5 <cit.>, MobileViG-M is over 3x faster in NPU latency and 2x faster in GPU latency with 0.2% higher top-1 accuracy. Additionally, MobileViG-B is 6x faster than DeiT-S and is able to beat both DeiT-S and Swin-Tiny in top-1 accuracy. §.§ Object Detection and Instance Segmentation We evaluate MobileViG on object detection and instance segmentation tasks to further prove the potential of SVGA. We integrate MobileViG as a backbone in the Mask-RCNN framework <cit.> and experiment using the MS COCO 2017 dataset <cit.>. We implement the backbone using PyTorch 1.12 <cit.> and Timm library <cit.>, and use 4 NVIDIA RTX A6000 GPUs to train our models. We initialize the model with pretrained ImageNet-1k weights from 300 epochs of training, use AdamW <cit.> optimizer with an initial learning rate of 2e-4 and train the model for 12 epochs with a standard resolution (1333 X 800) following the process of Next-ViT, EfficientFormer, and EfficientFormerV2 <cit.>. 
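For completeness, the latency-averaging protocol used in the tables (warm-up followed by an average over 1000 predictions) can be sketched as follows. The reported numbers come from models compiled with CoreML and executed on the iPhone 13 Mini NPU/GPU; that on-device pipeline is not reproduced here, so this host-side PyTorch timing loop only illustrates the measurement procedure, and the placeholder network merely stands in for the model under test.

```python
import time
import torch
import torch.nn as nn

def average_latency_ms(model: nn.Module, input_shape=(1, 3, 224, 224),
                       warmup: int = 50, iters: int = 1000) -> float:
    """Average single-image latency in milliseconds over `iters` predictions."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(warmup):      # warm-up so one-time costs are excluded
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        elapsed = time.perf_counter() - start
    return elapsed / iters * 1e3

if __name__ == "__main__":
    # Placeholder network; substitute the model under test (e.g. a MobileViG variant).
    toy = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.GELU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1000))
    print(f"average latency: {average_latency_ms(toy):.2f} ms")
```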
As seen in Table <ref>, with similar model size MobileViG outperforms ResNet, PoolFormer, EfficientFormer, and PVT in terms of either parameters or improved average precision (AP) on object detection and/or instance segmentation. The medium size MobileViG-M model gets 41.3 APbox, 62.8 APbox when 50 Intersection over Union (IoU), and 45.1 APbox when 75 IoU on the object detection task. MobileViG-M gets 38.1 APmask, 60.1 APmask when 50 IoU, and 40.8 APmask when 75 IoU for the instance segmentation task. The big size MobileViG-B model gets 42.0 APbox, 64.3 APbox when 50 IoU, and 46.0 APbox when 75 IoU on the object detection task. MobileViG-B gets 38.9 APmask, 61.4 APmask when 50 IoU, and 41.6 APmask when 75 IoU on the instance segmentation task. The strong performance of MobileViG on object detection and instance segmentation shows that MobileViG generalizes well as a backbone for different tasks in computer vision. The design of MobileViG is partly inspired by the designs of Pyramid ViG <cit.>, EfficientFormer <cit.>, and the MetaFormer concept <cit.>. The results achieved in MobileViG demonstrate that hybrid CNN-GNN architectures are a viable alternative to CNN, ViT, and hybrid CNN-ViT designs. Hybrid CNN-GNN architectures can provide the speed of CNN-based models along with the accuracy of ViT models making them an ideal candidate for high accuracy mobile architecture designs. Further explorations of hybrid CNN-GNN architectures for mobile computer vision tasks can improve on the MobileViG concept and introduce new state-of-the-art architectures. § CONCLUSION In this work, we have proposed a graph-based attention mechanism, Sparse Vision Graph Attention (SVGA), and MobileViG, a competitive mobile vision architecture that uses SVGA. SVGA does not require reshaping and allows for the graph structure to be known prior to inference, unlike previous methods. We use inverted residual blocks, max-relative graph convolution, and feed-forward network layers to create MobileViG, a hybrid CNN-GNN architecture, that achieves competitive results on image classification, object detection, and instance segmentation tasks. MobileViG outperforms existing ViG models and many existing mobile models, including MobileNetv2, in terms of accuracy and latency. Future research on mobile architectures can further explore the potential of GNN-based models on resource-constrained devices for IoT applications. ieee_fullname
http://arxiv.org/abs/2307.03019v1
20230706143153
Quasiperiodic oscillations around hairy black holes in Horndeski gravity
[ "Javlon Rayimbaev", "Konstantinos F. Dialektopoulos", "Furkat Sarikulov", "Ahmadjon Abdujabbarov" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
[email protected] [email protected] [email protected] [email protected] Institute of Fundamental and Applied Research, National Research University TIIAME, Kori Niyoziy 39, Tashkent 100000, Uzbekistan Akfa University, Milliy Bog Street 264, Tashkent 111221, Uzbekistan National University of Uzbekistan, Tashkent 100174, Uzbekistan Tashkent State Technical University, Tashkent 100095, Uzbekistan Department of Physics, Nazarbayev University, 53 Kabanbay Batyr Avenue, 010000 Astana, Kazakhstan Laboratory of Physics, Faculty of Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece School of Mathematics and Natural Sciences, New Uzbekistan University, Mustaqillik Ave. 54, Tashkent 100007, Uzbekistan Ulugh Beg Astronomical Institute, Astronomy str. 33, Tashkent 100052, Uzbekistan Inha University in Tashkent, Ziyolilar 9, Tashkent 100170, Uzbekistan Quasiperiodic oscillations around hairy black holes in Horndeski gravity Javlon Rayimbaev e1, addr11,addr12,addr13,addr14 Konstantinos F. Dialektopoulos e2, addr21, addr22 Furkat Sarikulov e4,addr30,addr31,addr33 Ahmadjon Abdujabbarov e3,addr13,addr14,addr31 Received: date / Accepted: date ============================================================================================================================================================================================== Testing gravity theories and their parameters using observations is an important issue in relativistic astrophysics. In this context, we investigate the motion of test particles and their harmonic oscillations in the spacetime of non-rotating hairy black holes (BHs) in Hordeski gravity, together with astrophysical applications of quasiperiodic oscillations (QPOs). We show possible values of upper and lower frequencies of twin-peak QPOs which may occur in the orbits from innermost stable circular orbits to infinity for various values of the Horndeski parameter q in relativistic precession, warped disk models, and three different sub-models of the epicyclic resonant model. We also study the behaviour of the QPO orbits and their position relative to innermost stable circular orbits (ISCOs) with respect to different values of the parameter q. It is obtained that at a critical value of the Horndeski parameter ISCO radius takes 6M which has been in the pure Schwarzschild case. Finally, we obtain mass constraints of the central BH of microquasars GRS 1915+105 and XTE 1550-564 at the GR limit and the possible value of the Horndeski parameter in the frame of the above-mentioned QPO models. The analysis of orbits of twin peak QPOs with the ratio of upper and lower frequencies 3:2, around the BHs in the frame of relativistic precession (RP) and epicyclic resonance (ER4) QPO models have shown that the orbits locate close to the ISCO. The distance between QPO orbits and ISCO is obtained to be less than the error of the observations. § INTRODUCTION Quasiperiodic oscillations (QPOs) are astrophysical phenomena corresponding to (several) peaks observed in radio-to-X-ray bands of the electromagnetic spectrum. Twin-peaked QPOs in microquasars can be observed through the process of matter accreting into neutron stars, white dwarfs, or black holes (BHs) <cit.>. One may distinguish two types of such twin peak QPOs: high frequency (HF) corresponding to the frequency range from 0.1 to 1 kHz and low frequency (LF) with a frequency less than 0.1 kHz. 
The QPOs with several peaks can be observed in low-mass X-ray binaries (LMXBs) where one (or both) component(s) consists of neutron stars <cit.>. Spectral analysis has also shown that twin-peaked QPOs and QPOs with several peaks differ from each other. Therefore, separate models must be considered to describe the origin of QPOs in different scenarios  <cit.>. Spectral behaviour of QPOs and their temporal variability may be helpful in determining/measuring magnetic field properties in the accretion disk of neutron stars and BHs in microquasars <cit.>. Despite the fact that there are many research works devoted to the high-accuracy measurements of the QPO frequencies and testing gravity theories, the unique astrophysical model explicitly describing the behaviour of QPOs has not been yet proposed. One of the promising models for explaining the generation of QPOs is connected with the dynamics of test particles around BHs and their harmonic oscillations in the radial, vertical, and azimuthal directions. Thus, using data from QPOs detected in microquasars <cit.> one may test the spacetime around BHs. Moreover, investigations of QPO models based on the orbital motion of test particles might enable us to study the inner edge of the accretion disk surrounding BHs. Our previous studies have shown that QPO orbits are close to the innermost stable circular orbit (ISCO) of particles <cit.>. This implies that using the QPO analysis, one may estimate the values of the mass of the central BH and its spin (or/and other parameters). Moreover, the studies are also helpful in determining which theory of gravity plays the dominant role in spacetime around the central BHs of the microquasars <cit.>. Being well tested and justified in weak and strong field regimes, general relativity needs to be further updated/modified in order to resolve its shortcomings, such as the nature of the dark sector, singularities, the value of the cosmological constant, the H_0 tension and more. A very bright example is that general relativity meets the singularity issue during gravitational collapse, and it requires the presence of so-called dark energy to explain the accelerated expansion of the present universe <cit.>. Among the various ways to modify general relativity, scalar tensor theories such as the Horndeski theory of gravity <cit.> reflect a special interest in astrophysics due to their advantages. Particularly, one of the interesting features of the theory is that even though its Lagrangian contains second-order derivatives of the metric and the scalar field, the equations of motion contain up to second-order derivatives of the metric and a scalar field (see, e.g.  <cit.>). The field equations of the Horndeski theory have been obtained using the variation principle applied to the action containing the metric and a scalar field (which has a form of scalar-tensor models having Galilean symmetry in flat spacetime <cit.>) and contain all symmetries of general relativity  <cit.>. The effects of Horndeski gravity in strong field regimes near gravitating compact objects <cit.> and in large cosmological scales <cit.> have been extensively studied. The gravitational lensing by BHs in Horndeski's theory in weak field limits has been investigated by the authors of Ref. <cit.>. 
Furthermore, Horndeski gravity has been formulated in the teleparallel geometry <cit.>, and apart from the richer phenomenology that the theory presents, it has been shown that terms that were severely constrained from GW170817, can be revived in this framework <cit.>. Other applications in the teleparallel framework can be found in Refs. <cit.> as well as in the reviews <cit.>. BHs in Horndeski gravity have a nontrivial scalar field profile, which is commonly called hair. BH solutions within Horndeski gravity have been obtained in Refs. <cit.>, particularly, the solutions with a radially dependent hairy scalar field have been studied in Refs. <cit.>. Further analysis of the theory and corresponding solutions of Horndeski gravity have been intensively studied in Refs. <cit.>. Additionally, testing Horndeski gravity using observational data from the size of the shadow of rotating supermassive BH M87* by EHT collaboration, has been widely studied in <cit.> and relationships between the spin of the BH and the Horndeski parameters have been obtained. Authors in Ref. <cit.> have provided detailed analyses on gravitational lensing by Horndeski black holes and applied the calculations to several astrophysical supermassive BHs. Moreover, studies of mass to radius relation of neutron stars within Horndeski gravity have been investigated in Ref. <cit.>. In this paper, we plan to study the motion of the test particles around the hairy BH and its application to describe the QPOs. The paper is organized as follows: In Sect. <ref> we review the hairy BH solution. Sec. <ref> is devoted to studying the motion of the test particles around the hairy BH in Horndeski gravity. The fundamental frequencies associated with circular orbits of the particles have been studied in Sect. <ref>. The application of particle motion and fundamental frequencies to QPO analysis has been provided in Sect. <ref>. We conclude our results in Sec. <ref>. Throughout this paper, we use the (–, +, +, +) signature for the spacetime metric and system of units where G=1=c. § HAIRY BLACK HOLE IN HORNDESKI GRAVITY Horndeski gravity is a modification of general relativity, being the most general scalar-tensor theory in four dimensions that leads to second-order field equations <cit.>. Its action is described by 𝒮 = ∫ d^4x √(-g)∑ _i=2^5 L_i , where L_i are the following Lagrangian and X = -1/2∇ _μϕ∇ ^μϕ L_2 = G_2(ϕ,X) , L_3 = -G_3(ϕ,X)□ϕ , L_4 = G_4(ϕ,X)R + G_4,X(ϕ,X)[(□ϕ)^2 - ∇_μ∇_νϕ∇^μ∇^νϕ] , L_5 = G_5(ϕ,X)G_μν∇^μ∇^νϕ - 1/6G_5,X(ϕ,X) × [(□ϕ )^3 + 2∇_ν∇_μϕ∇^ν∇^λϕ∇_λ∇^μϕ - 3□ϕ∇_μ∇_νϕ∇^μ∇^νϕ] . G_i (ϕ, X) are arbitrary functions of the scalar field and its kinetic term. Following <cit.> we study a subclass of the above theory that considers G_i to be only functions of the kinetic term, i.e. G_i(X). In addition, G_5(X) = 0. The field equations can be obtained by varying the action (<ref>) with respect to the metric, G_4(X)G_μν=T_μν where T_μν = 1/2(G_2,X∇ _μϕ∇ _νϕ + G_2 g_μν) + 1/2 G_3,X( ∇ _μϕ∇ _νϕ□ϕ - g_μν∇ _α X ∇ ^αϕ + 2∇ _(μ X ∇ _ν)ϕ) - G_4,X[ ∇ _γ∇ _μϕ∇ ^γ∇ _νϕ- - ∇ _μ∇ _νϕ□ϕ + 1/2 g_μν( (□ϕ)^2 - (∇ _α∇ _βϕ )^2 - 2 R_σγ∇ ^σϕ∇ ^γϕ) -R/2∇ _μϕ∇ _νϕ + 2R_σ (μ|∇ ^σϕ∇ _|ν)ϕ + R_σνγμ∇ ^σϕ∇ ^γϕ] - G_4,XX × [ g_μν( ∇ _α X ∇ ^αϕ□ϕ +∇ _α X ∇ ^α X) + 1/2∇ _μϕ∇ _νϕ × ((∇ _α∇ _βϕ)^2 - (□ϕ)^2 ) - ∇ _μ X ∇ _ν X - 2□ϕ∇ _(μ X∇_ν )ϕ - ∇ _γ X × ( ∇ ^γϕ∇ _μ∇ _νϕ - 2 ∇ ^γ∇ _(μϕ∇ _ν)ϕ) ]. The finite four-current j^μ, which identifies the invariance of the scalar field under shift symmetry can be defined as, j^μ=1/√(-g)δ S/δϕ _,μ . 
and in our case, it reads j^ν = - G_2,Xϕ ^,ν - G_3,X (ϕ ^,ν□ϕ + X^,ν) - G_4,X (ϕ^,νR - 2R^νσϕ_,σ) - G_4,XX[ϕ^,ν( (□ϕ)^2 - ∇_α∇_βϕ∇^α∇^βϕ) + 2 (X^,ν□ X - X_,μ∇^μ∇^νϕ) ] . For the metric ds^2 = - A(r) dt^2 + 1/B(r)dr^2 + r^2 dΩ ^2 , the non-vanishing component of the above current takes the form j^r = -G_2,X B ϕ ' - G_3,X4A+rA'/2rAB^2 ϕ^'2 + 2 G_4,XB/r^2 A[ (B-1)A + r BA']ϕ' - 2 G_4,XXB^3 (A+rA')/r^2Aϕ^'3 , where ' denotes differentiation with respect to the radial coordinate. For simplicity, we set G_2 = α _21 X + α _22(-X)^ω _2 , G_3 = α _31(-X)^ω _3 , G_4 = 1/8π + α _42(-X)^ω _4 . For hairy solutions to exist, we set α _21 = α _31 = 0 , ω _2 = 3/2 , ω _4 = 1/2 , and for imposing j^r = 0 we obtain ϕ' = ±2/r√(-α _42/3 B α _22) . In order to see the complete derivation, check <cit.>. From the metric equations <ref> we get A(r) = B(r) = 1-2 M/r+q/rlnr/2 M, with q being a constant q = (2/3)^3/2κ ^2 α _42√(-α_42/α_22) . For the scalar field to satisfy the energy conditions, the expression in the square root in q (and thus Eq. <ref>) should be positive definite, otherwise, the scalar field would be imaginary. This means that α _42 has to be negative or α _22, but not both at the same time. Summarizing, the geometry around a hairy BH in Horndeski gravity can be described by the following spacetime ds^2=-F(r)dt^2+1/F(r)dr^2+r^2(dθ^2+sin^2θ dϕ^2), with the metric function defined as F(r)=1-2 M/r+q/rlnr/2 M, where M is the BH mass and q is the scalar charge with the dimension of length related to the non-trivial scalar field. From Fig. <ref> it is clearly seen that the metric (<ref>) always has the horizon at the Schwarzschild radius (r=2M) irrespective of the value of the parameter q. Moreover, BH has two horizons when -2<q/M<0. The event horizon radius of the BH can be found using g_rr→∞ or, say, g^rr=0 which reduces F(r)=0. One can easily see from the expression of F(r) that there are two event horizons in the spacetime of the hairy BH: outer and inner. The outer one is the event horizon and the inner one is called the Cauchy horizon. The outer horizon is 2M for the values of the parameter q/M from -2 to 0, and the inner horizon vanishes at q=0 and at q=-2M and these two horizons coincide with each other at r=2M. With an increase in the parameter q, the Cauchy horizon decreases (see Fig. <ref>). The expression of the inner horizon has the form, r=q ProductLog(2M/q e^2M/q), where ProductLog(z), for arbitrary z, is defined as the principal solution of the equation We^W = z. § TEST PARTICLE MOTION AROUND HAIRY BLACK HOLE In this section, we consider the dynamics of electrically neutral test particles around a hairy BH in Horndeski gravity using the following Lagrangian for the test particles L_p = 1/2 m g_μνẋ^μẋ^ν. where m is the mass of the test particle and overdot stands for the derivative with respect to proper time τ. It is worth noting that x(τ) is the particle worldline, parametrized by the proper time τ and the particle's four-velocity, u_μ is defined as u^μ = dx^μ/dτ. Due to the symmetry of the spacetime around a spherically symmetric BH, one may directly obtain two integrals of motion: energy E and angular momentum L in the form E = -u_μξ^μ, ṫ= E/F(r) , L = u_μη^μ, ϕ̇= L/r^2 sin^2θ , where ξ^μ and η^μ are the Killing vectors associated with time-translation and rotational invariance, respectively. E=E/m and L=L/m in Eqs. (<ref>)-(<ref>) stand for specific energy and angular momentum. 
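As a brief numerical aside before continuing with the orbital dynamics, the horizon structure just described can be verified directly: the sketch below evaluates F(r) at r = 2M and at the Cauchy-horizon radius obtained from the ProductLog (Lambert W) expression, using SciPy's lambertw in units G = c = 1 with M = 1. The particular values of q sampled are arbitrary choices within the allowed range -2M < q < 0.

```python
import numpy as np
from scipy.special import lambertw

M = 1.0  # geometric units, G = c = 1

def F(r, q):
    # Metric function of the hairy black hole: F(r) = 1 - 2M/r + (q/r) ln(r/2M)
    return 1.0 - 2.0 * M / r + (q / r) * np.log(r / (2.0 * M))

def cauchy_horizon(q):
    # r = q * W( (2M/q) * exp(2M/q) ), principal branch of the Lambert W function
    z = (2.0 * M / q) * np.exp(2.0 * M / q)
    return float(np.real(q * lambertw(z, k=0)))

for q in (-0.5 * M, -1.0 * M, -1.5 * M, -2.0 * M):
    r_in = cauchy_horizon(q)
    print(f"q/M = {q:+.1f}:  r_in/M = {r_in:.4f},  "
          f"F(r_in) = {F(r_in, q):+.2e},  F(2M) = {F(2.0 * M, q):+.2e}")
```

Both evaluations of F should vanish to machine precision, and for q = -2M the two horizons indeed coincide at r = 2M.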
Equations of motion for the test particle are then governed by the normalization condition g_μνu^μ u^ν=ε , where ε equals 0 and -1 for massless and massive particles, respectively. For the massive particles' equation of motion governed by timelike geodesics of spacetime and the equations of motion can be found by using Eq. <ref>. Taking into account Eqs. (<ref>)-(<ref>) one may obtain the equations of motion in the separated and integrated form as ṙ^2 = E - F(r) (1+ K/r^2), θ̇^2 = 1/g_θθ^2( K- L^2/sin^2θ) , where K denotes the Carter constant corresponding to the total angular momentum. Restricting the motion of the particle to a constant plane, in which θ = const and θ̇=0, thus, the Carter constant takes the form K = L^2/sin^2θ and the equation of the radial motion can be expressed in the form: ṙ^2= E^2-V_ eff , where the effective potential of the radial motion reads V_ eff = F(r) (1+ L^2/r^2 sin ^2θ). Now, we apply standard conditions for the circular motion, which corresponds to zero radial velocity ṙ = 0 and acceleration r̈ = 0. One can obtain the expressions of the specific angular momentum and the specific energy for circular orbits at the equatorial plane (θ = π/2) in the following form: L^2 = r^2 2 M-q (lnr/2 M-1)/q (3 lnr/2 M-1)+2 r-6 M , E^2 = 2(q lnr/2 M+r-2 M)^2/r [q (3 lnr/2 M-1)+2 r-6 M] . Figure <ref> demonstrates radial profiles of specific energy and angular momentum of test particles along circular stable orbits around a hairy BH in Horndeski gravity, for the different values of parameter q. It is seen from the figure that the specific angular momentum and its minimum value increase due to the presence of the parameter q. On the other hand, specific energy decreases with the decrease of parameter q. Note that the metric (<ref>) reduces to Schwarzschild when q = 0. One can easily see from Fig. <ref> that the value of the radius of marginally stable circular orbits of test particles, which may be also referred to as the radius of the photonsphere, increases with the increase of the parameter q. In order to determine the radius of the photonsphere we solve the radial geodesic equation, q (1-3 lnr/2 M)+6 M-2 r=0 . In Fig. <ref> we provide the photonsphere radius around hairy BHs in Horndeski gravity as a function of the parameter q. One may easily see from the figure that as the parameter q goes from -2M to zero, the radius of the photonsphere decreases from about 4.3M to 3M. §.§ Innermost stable circular orbits The stable circular orbits occur at the radius r=r_min where the minimum of the effective potential takes place. The innermost stable circular orbit corresponds to ∂_rrV_ eff=0 and/or ∂_rL=0 which leads to the same results. After some algebraic simplifications, the equation for the ISCO radius of test particles is obtained in the following form: 6 M+3 q+r-3 q lnr/2 M +(q+r) (3 q+2 r)/-3 q lnr/M+6 M+q+q ln (8)-2 r=0 . As we mentioned above that the solution of this equation with respect to radial coordinates implies ISCO radius. However, due to the complicated form of Eq. (<ref>) it is hard to solve it analytically with respect to r. In order to analyse the behaviour of ISCO radius, we present the numerical results of the dependence of ISCO radius r_ ISCO from q/M in Fig. <ref>. Figure <ref> represents the dependence of ISCO radius from the parameter q. It is observed from Fig. <ref> that the radius of ISCO, first, decreases with increasing parameter q, reaches the minimum value, (r_ ISCO)_ min≈ 5.7846 M at q/M=-0.615 and then increases again up to 6M. 
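The behaviour of the photon sphere and the ISCO summarised above can be reproduced with a few lines of numerics: the photon-sphere radius is the root of the condition quoted above, and, as noted, the ISCO follows equivalently from ∂_r L = 0, i.e. from the minimum of L²(r). The sketch below is an illustrative SciPy implementation in units G = c = M = 1; the bracketing interval and search bounds are ad-hoc choices valid for -2M ≤ q ≤ 0.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

M = 1.0  # G = c = 1

def L2(r, q):
    # Specific angular momentum squared on circular orbits
    num = 2.0 * M - q * (np.log(r / (2.0 * M)) - 1.0)
    den = q * (3.0 * np.log(r / (2.0 * M)) - 1.0) + 2.0 * r - 6.0 * M
    return r**2 * num / den

def photon_sphere(q):
    # Root of  q(1 - 3 ln(r/2M)) + 6M - 2r = 0
    f = lambda r: q * (1.0 - 3.0 * np.log(r / (2.0 * M))) + 6.0 * M - 2.0 * r
    return brentq(f, 2.1 * M, 8.0 * M)

def isco(q):
    # ISCO as the minimum of L^2(r); dL^2/dr = 0 is the marginal-stability condition
    r_ph = photon_sphere(q)
    res = minimize_scalar(lambda r: L2(r, q), bounds=(1.05 * r_ph, 20.0 * M), method="bounded")
    return res.x

for q in (0.0, -0.5 * M, -0.615 * M, -1.0 * M, -1.5 * M, -2.0 * M):
    print(f"q/M = {q:+.3f}:  r_ph/M = {photon_sphere(q):.4f},  r_isco/M = {isco(q):.4f}")
```

For q = 0 the sketch recovers the Schwarzschild values r_ph = 3M and r_ISCO = 6M, and the remaining rows can be compared directly with the figures discussed in the text.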
For q=0 and q=-1.14M, the ISCO radius equals to 6M, which covers the result for the Schwarzschild BH case. This implies that in these values of the parameter q it has a degeneracy behaviour. § FUNDAMENTAL FREQUENCIES In this section, we provide derivations of an expression for the fundamental frequencies governed by the particle orbiting around the hairy BH in Horndeski gravity. In particular, we explore frequencies of Keplerian orbits and the radial & vertical (to the orbital plane) oscillations, which are helpful in investigations of QPO models. §.§ Keplerian frequencies The angular velocity of the particles orbiting around the BH measured by an observer located at infinity is called the Keplerian frequency Ω_K=dϕ/dt and can be expressed as Ω_K = √(-∂_r g_tt/∂_r g_ϕϕ)=√(F'(r)/2r). The expression of the frequency in the Horndeski spacetime metric given in Eq.(<ref>) takes the following form Ω_K^2= M/r^3+q/2r^3(1-lnr/2M) . Furthermore, to express the frequencies in Hz we use the following equation: ν_K,r,θ=c^3/2π GMΩ_K,r,θ , where c and G are the speed of light in a vacuum and the gravitational (Newtonian) constant. Figure <ref> demonstrates radial profiles of the frequencies of Keplerian orbits of test particles around hairy BHs in Horndeski gravity for different values of the parameter q. It is observed that the increase of q causes to decrease of the Keplerian frequency up to the distance of about (4.43-4.45)M. However, far from this range, the frequency decreases slower due to the presence of the parameter q. §.§ Harmonic oscillations We consider a test particle to oscillate along the radial, angular, and vertical axes in its stable orbits around a static BH in the equatorial plane due to the small displacement from the orbits as r_0+δ r and π/2+δθ. One can calculate the frequencies of the radial and vertical oscillations measured by a distant observer using harmonic oscillator equations <cit.>: d^2δ r/dt^2+Ω_r^2 δ r=0 , d^2δθ/dt^2+Ω_θ^2 δθ=0 , where Ω_r^2=-1/2g_rr(u^t)^2∂_r^2V_ eff(r,θ) |_θ=π/2 , Ω_θ^2=-1/2g_θθ(u^t)^2∂_θ^2V_ eff(r,θ) |_θ=π/2 , are the frequencies of the radial and vertical oscillations, respectively. After some algebraic calculation and simplifications, we immediately have expressions for the frequencies in the spacetime of static BHs expressed as follows <cit.>: Ω_r^2 = Ω^2_K(1-6 M/r + q/r[3 lnr/2 M-r-2M-q ln (r/2 M)/q (1+ln (r/2 M))+2 M]) , Ω_θ = Ω_ϕ = Ω_K . The radial profiles of the frequencies of the radial oscillations of particles around a hairy BH in Horndeski gravity are shown in Fig. <ref> for the various values of the parameter q. It is found that the maximum value of the frequency increases with the decrease of q up to q=-0.6M and then decreases back. § ASTROPHYSICAL APPLICATIONS: QPOS This section is devoted to exploring possible values of frequencies of twin-peak QPOs around hairy BH in Horndeski gravity using various QPO models, in particular, to compare q parameter effects on the upper and lower frequencies with the effects of the spin of rotating Kerr BH <cit.>. In addition, we also focus on determining the relationship between the mass of the hairy BH and the parameter q using their observational frequency data from QPOs. We also consider that the BHs at the center of the microquasars GRS 1915+105 <cit.> and XTE 1550-564 <cit.> are hairy ones. 
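Before specialising to particular QPO models in the next subsection, it is convenient to have the three frequencies in physical units. The sketch below evaluates ν_φ = ν_θ from the closed form of Ω_K together with the rescaling ν = c³Ω/(2πGM), and obtains ν_r numerically from the curvature of the effective potential at the circular orbit rather than from the closed-form expression quoted above; the helper functions are repeated so the block runs standalone, and the black-hole mass M = 5 M_⊙, the orbit radius, and the finite-difference step are illustrative choices.

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
M_BH = 5.0 * Msun                         # illustrative stellar-mass black hole
nu0 = c**3 / (2.0 * np.pi * G * M_BH)     # converts a dimensionless Omega (G=c=M=1) to Hz

M = 1.0  # geometric units inside the metric functions

def F(r, q):
    return 1.0 - 2.0 * M / r + (q / r) * np.log(r / (2.0 * M))

def L2(r, q):
    num = 2.0 * M - q * (np.log(r / (2.0 * M)) - 1.0)
    den = q * (3.0 * np.log(r / (2.0 * M)) - 1.0) + 2.0 * r - 6.0 * M
    return r**2 * num / den

def E2(r, q):
    den = r * (q * (3.0 * np.log(r / (2.0 * M)) - 1.0) + 2.0 * r - 6.0 * M)
    return 2.0 * (q * np.log(r / (2.0 * M)) + r - 2.0 * M) ** 2 / den

def Omega_K(r, q):
    # Keplerian angular velocity: Omega_K^2 = M/r^3 + q/(2 r^3) (1 - ln(r/2M))
    return np.sqrt(M / r**3 + q / (2.0 * r**3) * (1.0 - np.log(r / (2.0 * M))))

def Omega_r(r, q, h=1e-4):
    # Radial epicyclic frequency from the second derivative of V_eff = F(r)(1 + L^2/r^2),
    # with L^2 frozen at its circular-orbit value (central finite difference); this
    # reduces to Omega_K^2 (1 - 6M/r) in the Schwarzschild limit q = 0.
    l2 = L2(r, q)
    V = lambda x: F(x, q) * (1.0 + l2 / x**2)
    d2V = (V(r + h) - 2.0 * V(r) + V(r - h)) / h**2
    return np.sqrt(F(r, q) ** 2 * d2V / (2.0 * E2(r, q)))

q, r = -1.0, 8.0   # example: q = -M, orbit at r = 8M
print(f"nu_phi = nu_theta = {nu0 * Omega_K(r, q):7.1f} Hz")
print(f"nu_r              = {nu0 * Omega_r(r, q):7.1f} Hz")
```

These two quantities are exactly the ingredients combined into ν_U and ν_L by the RP, ER, and WD models discussed next.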
§.§ QPO models In this subsection, we plan to study the possible values of the upper and lower frequencies by the following models for twin peak HF QPOs described by the fundamental frequencies of test particles around compact gravitating objects (ν_r,θ,ϕ=c^3/(2π GM)Ω_r,θ,ϕ): * Relativistic precession (RP) model has been proposed by Stella & Vietri <cit.> for kHz twin peak QPOs corresponding to the frequencies in the range from 0.2 to 1.25 kHz from neutron stars in LMXRBs. Later, it has been shown that the model is also applicable to BH candidates in the binary systems of BH and neutron stars <cit.>. RP model has been further developed by Ingram <cit.> in order to obtain precise measurements of the mass and spin of central BH in microquasars using data from the power-density spectrum of the BH accretion disk. According to the RP model, the upper and lower frequencies are described by the frequencies of the radial, vertical and orbital oscillations in the forms ν_U=ν_ϕ and ν_L=ν_ϕ-ν_r, respectively. * The epicyclic resonance (ER) model considers resonances of the axisymmetric oscillation modes of a thin accretion disc around BHs <cit.>. The frequencies of the disc oscillation modes are related to the frequencies of orbital and epicyclic oscillations of the circular geodesics of the test particles. Here, we use the variations of ER model: ER2, ER3, and ER4 which differ in their oscillation modes. The corresponding upper and lower frequencies in ER2-4 models, defined as ν_U=2ν_θ-ν_r & ν_L=ν_r, ν_U=ν_θ+ν_r & ν_L=ν_θ and ν_U=ν_θ+ν_r & ν_L=ν_θ-ν_r, respectively  <cit.>. * The warped disc (WD) model assumes non-axisymmetric oscillatory modes of a thin accretion disc around BHs and neutron stars <cit.>. In the WD model, the upper and lower frequencies are defined as ν_U=2ν_ϕ-ν_r, ν_L=2(ν_ϕ-ν_r)  <cit.>. The vertical oscillatory frequency ν_θ has been introduced by the assumptions of vertical axial symmetric oscillations of the accretion disc that cause the disc to warp. In Fig. <ref> we demonstrate the diagram ν_U-ν_L for twin-peak QPOs around the hairy BHs in the RP, ER2-4, and WD models together with the comparisons of the QPOs around rotating Kerr BHs with the spin parameter a/M=0.1. In plotting the figure, we have taken the value of the central black hole mass M=5M_⊙ as a test stellar mass black hole. To obtain units of the frequencies in Hz we use Eq.(<ref>). The light-blue shaded area in the top-right and bottom panels and orange-shaded area in the top left and middle panels in the diagram imply the graveyard for twin peak QPOs. Any twin-peaked QPOs cannot be observed in that area. The inclined lines bordering the areas are deathlines for the twin-peak QPOs where the upper and lower frequencies are equal to each other and the two peaks in the twin-peak QPOs merge into a single peak. This implies that if a QPO position falls down under the deadline in the graveyard, then the QPO object disappears from observation. The diagram shows that the ratio of the upper and lower frequencies increases with negative values of the parameter q is approximately q≃-0.5M. However, for values less than q=-0.5M the ratio decreases. Our numerical comparisons have shown that the parameter q can mimic the spin of Kerr BH up to about a=0.1M with its value q≃-0.55M in all the models considered providing the same values for upper and lower frequencies in twin-peaked QPOs. This means that both a Kerr black hole and a hairy black hole can produce almost identical QPO frequencies by testing the surrounding particles. 
From this point of view, it is not possible to distinguish the two cases using the QPO analyzes. It requires, additionally, other independent types of observational data and theoretical detailed analysis. So, there is a degeneracy between the spin of Kerr black holes and the parameter of static hairy black holes. §.§ QPO orbits In this subsection, we study relationships between the parameter q and the radius of orbits where a QPO shines in the RP, ER2-4, and WD models by constructing the following equation for the ratio of upper and lower frequencies, 3ν_L(M,r,q)=2ν_U(M,r,q) , 4ν_L(M,r,q)=3ν_U(M,r,q) , 5ν_L(M,r,q)=4ν_U(M,r,q) . One may get the following equation for the relationships between the radius of orbits where the QPO appears with the ratio 3:2 and the parameter q, by substituting Eqs. (<ref>), (<ref>) and (<ref>) into Eq. (<ref>) in RP model: 6 M/r-q/r[3 lnr/2 M-r-2M-q lnr/2 M/q (1+lnr/2 M)+2 M]=2/3. Due to the complex form of the Eq. (<ref>) it is impossible to get the analytical expression for the radius. However, one can perform a numerical analysis of the dependence of the radius r on q. Similar analyses on the QPO radius for the frequency ratios 4:3 & 5:4 in WD & ER QPO models can be performed. In Fig. <ref> we demonstrate the QPO radius as a function of the parameter q. One can see from Fig. <ref> that the QPO orbits are located outside the ISCO. One can see from Fig. <ref> that the orbits of QPOs with 5:4 are closer to the central object than the other ratios. Thus, if a twin peak QPO shines in an orbit close to ISCO, the peak frequencies of the QPO become close to each other. This means that if a twin peak QPO generates at ISCO by test particles, the two peaks unify and the upper and lower frequencies become equal to each other. Moreover, the distance between the QPO orbit and ISCO depends on the QPO model and the Horndeski parameter q. In RP, WD, and ER3,4 models, the QPO orbits are located near ISCO. However, in the ER2 model, the orbits are quite far from the ISCO. It is also observed from Fig.<ref> that the QPO profiles at q/M=-0.5 and -1 lie under the line corresponding to the Schwarzschild case (q=0), while in cases when q/M=-1.5 and q/M=-2 the lines appear under the Schwarzschild curve. One can explain the strange feature of QPO profiles using the similar behaviour of QPO orbits with ISCO <cit.>. One can see from Fig.<ref> that there is a critical value in the Horndeski parameter, q_ cr/M=-1.14834 where ISCO radius equals 6M. That implies this case, the Horndeski BH mimics the Schwarzchild one, providing the same ISCO radius as well as QPO orbits. When |q|<|q_ cr| the ISCO around the Horndeski BH is less than 6M (see Fig.<ref>) and QPO orbits come close to 6M, positioning inside the QPO orbits in Schwarzschild case. It causes the ratio of QPO frequencies to be higher than the ratio in the Schwarzschild case. While at q<q_ cr the QPO orbits lie further 6M where a twin peak QPO can be generated with frequencies smaller than the Schwarzschild case. Now, we show the distance between the ISCO and QPO orbits (δ=r_ QPO-r_ ISCO) for the selected the QPOs observed in the microquasars GRS 1915+105 and XTE 1550-564, assuming the central BH is a hairy BH in Horndeski gravity. The frequencies of the QPO sources in the microquasars GRS 1915+105 and XTE 1550-564 are ν_U=168± 5 Hz & ν_L=113± 3 Hz, and ν_U=184 ± 5Hz & ν_L=276 ± 3 Hz <cit.>. 
Figure <ref> represents how far the QPO observed orbit from ISCO around a hairy BH in Horndeski gravity in RP, ER2-4, and WD models and its dependence on the parameter q. In this figure, we use the upper and lower frequencies of the QPO object in GRS J1915+105 and XTE 1550-564 microquasars in the left and right panels, respectively. One can easily see from the figure that the QPO orbits in RP and ER4 models are close to ISCO, while the orbits in ER3 and WD models quite far, and they are very close to each other. However, the QPO orbits located about 30-45M far from the central BH in the ER2 model, when q=-2M. It is also observed that the distances, δ in the WD and ER3 models, are very close to each other. That shows the physical mechanisms (oscillation modes) considered in these models, are similar. On the other hand, the behaviour of the distance with respect to the variation of the parameter q is also almost the same. One can see from this figure that an increase in the absolute value of the parameter q causes an increase in the distance δ. At the GR limit, where q=0, the distance is about δ≃ 0.72 M in the RP model, and while in the ER4 model, it is about 0.25M. The distances δ, calculated in the RP and ER4 models, consisting of about 4-7 % of ISCO radius, and it is in the order of the errors of the ISCO measurements. That means the radii of the QPO orbits are almost equal to the ISCO radius. In fact, ISCOs are one of the most important properties of BHs. From this point of view, QPO studies, in the frame of RP and ER4 models, may help to solve problems of ISCO measurements in astrophysical observations of BHs. However, the presence of the parameter reduces the distance bigger than the errors. Thus, one may conclude that the QPO studies in RP and ER4 models can help to solve the problem in the measurements of ISCO radius at the values of the Horndeski parameter q near the GR limit. §.§ BH mass constraints using QPO frequencies In this section, we obtain constraints on the mass and the parameter q of the hairy BH at the center of the microquasars GRS 1915+105 and XTE 1550-564, graphically. However, we can not find exact values of the BH mass and q parameters at once due to a lack of numbers of equations. In order to get the relationship between the BH mass and the parameter q, we set equations for the upper and lower frequencies using the QPO radius (as a function of the hairy parameter) which can be obtained numerically in the following form: ν_L(r,q,M)=ν_L^ob , ν_U(r,q,M)=ν_U^ob , where ν_L^ob and ν_U^ob are observational data of the lower and upper frequencies. Then, we solve Eq.(<ref>) in the power-law form r̃=a_nq̃^n, where r̃=r/M, q̃=q/M and a_n are dimensionless constants corresponding to the values of n. Then, we will put the relation back to Eq.(<ref>) for each observed QPO in the above-mentioned models. Consequently, we can get two equations with two unknowns. One can get numerical values for the hairy BH for different values of q. Thus, we provide the relationship of both mass and q parameters for the above-mentioned microquasars in Fig. <ref> considering the BH at the center of the microquasars are hairy ones. Our numerical calculations have shown that the BH mass does not exist, taking the imaginary values at q/M_⊙<-0.25 for GRS 1915+105, while for XTE 1550-564 at q/ M_⊙<-0.4. That means the hairy parameter q can not be less than -0.25M_⊙ for BH in GRS 1915+105, and for the BH in XTE 1550-564 the lower limit for the possible value of this parameter is -0.4M_⊙. 
One can see from the figure that at q=-0.25M_⊙ the obtained masses of the hairy BH at the center of the microquasar GRS 1915+105 in the RP, WD, and ER3-4 models are in the order of the error in measurements. In order to get constraints on the BH mass, we first, solve Eqs. (<ref>) with respect to the normalized radius to the black hole mass r/M as a function of normalized q/M parameter numerically, using frequency data from the above-mentioned objects for the above-mentioned QPO models. Then, we obtain the dependence of radii of QPOs with the ratio ν_U^ob:ν_L^ob from the parameter q/M by fitting the numerical solutions, and again, we put back the fitted dependencies into Eq.(<ref>) to get equations with two variables: M and q. Finally, we find numerical values of the mass of the hairy BHs at the microquasars GRS 1915+105 and XTE 1550-564 using observational values of QPO frequencies in these microquasars for two cases: q=0 (Schwarzschild limit) and the limiting values of q and present the obtained results in Tabs. <ref> and <ref>. From the obtained numerical results shown in Tabs. <ref> and <ref> one can easily see that the BH mass is almost the same in WD and ER3 models, and the ER4 model is not suitable for the studies of twin peak QPOs GRS in the microquasars GRS 1915+105 & XTE 1550-564. The optical spectroscopic observations of the microquasar system XTE J1550-564 have shown that the BH in this system is about (6.86± 0.71)M_⊙ <cit.>. The infrared spectroscopic analysis of Very Large Telescope data from GRS 1915+105 in the K band shows that the BH mass in GRS 1915+105 is found in Ref. <cit.> as (8.0± 0.6) M_⊙ and by Ref. <cit.> as (9.5 ± 3.0) M_⊙. § CONCLUSIONS In this paper, we have studied the motion of test particles around hairy BHs in Horndeski gravity. It is obtained that the spacetime (<ref>) describes a non-rotating, static BH if the parameter q takes the values from -2M to 0. Specifically, the spacetime splits into two main branches, with the first one being an analytical extension of the Schwarzschild solution and the other one containing two horizons; an exterior event horizon and an interior Cauchy one, which encloses the singularity. More details about the structure of spacetime, together with Penrose diagrams can be found in Ref. <cit.>. We have studied the specific energy and angular momentum of particles in circular stable orbits, and shown that both the energy and the angular momentum increase as the value of the parameter q increases from 0 to -2M. The study of the minimum radius of circular orbits and ISCOs has also shown that their values increase with respect to the decrease of the Horndeski parameter. Unlikely, it is observed that ISCO reaches its minimum at q/M=-0.62 and the minimum in the ISCO radius is about r=5.785M. It is obtained by studies of Keplerian orbits of test particles orbiting a hairy BH that parameter q causes to decrease the Keplerian frequency up to the distance about (4.43-4.45)M, then its effect becomes vice versa. We have investigated QPOs around hairy BHs as an application of harmonic oscillations in the RP, WD, and ER2-4 models, and the possible values of the upper and lower frequencies of twin-peak QPOs together with the radius of the QPO orbits with frequency ratios 3:2,4:3 and 5:4. It is found that the QPO orbits and ISCO are close to each other in RP and ER4 models. That means the ISCO measurement problem can be solved by the studies of twin peak QPOs in the frame of RP and ER4 models. 
Finally, we have constrained the mass of the central BH in the microquasars GRS 1915+105 and XTE 1550-564 using frequency data from the QPO objects in the GR limit and the presence of the parameter q. It is observed that the mass constraints of the BH in GRS 1915+105 at, q/M_⊙∈ (-0.25, 0) and it is in the case of XTE 1550-564 q/M_⊙∈ (-0.4, 0). It is shown that the original mechanism of the QPO objects can not be considered in ER2 models. J.R. FAA and AAA acknowledge the financial support for this work from Grant No. F-FA-2021-510 of the Ministry of Innovative Development of Uzbekistan. J.R. A.A. and F.S. acknowledge the ERASMUS+ ICM project for supporting their stay at the Silesian University in Opava. The work was also supported by Nazarbayev University Faculty Development Competitive Research Grant No. 11022021FD2926 and by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant” (Project Number: 2251). This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (Cosmo Verse) supported by COST (European Cooperation in Science and Technology). spphys
http://arxiv.org/abs/2307.00596v1
20230702153458
On the role of the Integrable Toda model in one-dimensional molecular dynamics
[ "Giancarlo Benettin", "Giuseppe Orsatti", "Antonio Ponno" ]
math-ph
[ "math-ph", "math.DS", "math.MP", "nlin.SI" ]
http://arxiv.org/abs/2307.04633v1
20230704183718
Realizing late-time cosmology in the context of Dynamical Stability Approach
[ "Anirban Chatterjee", "Saddam Hussain", "Kaushik Bhattacharya" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
1*]Anirban Chatterjee 1]Saddam Hussain 1]Kaushik Bhattacharya [1]Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India [*]Address correspondence to: [email protected] Realizing late-time cosmology in the context of Dynamical Stability Approach [ ============================================================================ We examine the scenario of non-minimally coupled relativistic fluid and k-essence scalar field in a flat Friedmann-Lemaitre-Robertson-Walker universe. By adding a non-minimal coupling term in the Lagrangian level, we study the variation of Lagrangian with respect to independent variables, which produces modified scalar field and Friedmann equations. Using dynamical stability approach in different types of interaction models with two types of scalar field potential, we explore this coupled framework. Implementing detailed analysis, we can conclude our models can able to produce stable late-time cosmic acceleration. § INTRODUCTION Standard model of cosmology (Λ-CDM model) <cit.>, mainly suffers from two drawbacks, first one is the fine-tuning problem and second one is a cosmic-coincidence problem. In this standard model of cosmology, Λ represents the cosmological constant and CDM denotes the cold-dark matter. Another important downside of the Λ-CDM model from the observational perspective is the discrepancy between the local measurement of present observed value of Hubble’s constant and the value predicted from the Planck experiment using Λ-CDM model <cit.>. These fundamental discrepancies motivate us to study different kinds of cosmological models found in the non-minimally coupled field-fluid sectors <cit.>,<cit.>,<cit.>. Based on these above considerations, we build a theoretical framework for a coupled field-fluid sector, where field sector is made of a non-canonical scalar field (k-essence sector <cit.>,<cit.>) and the fluid sector is composed of pressure-less dust. The non-minimal coupling term is introduced at the Lagrangian level. We employ the variational approach <cit.> with respect to independent variables that produce modified k-essence scalar field equations and the Friedmann equations. Then we analyze this coupled field-fluid framework explicitly using the dynamical system technique <cit.>, considering two forms of the scalar field potential, viz. inverse power-law type <cit.> and constant type <cit.>. After examining these scenarios, both models can produce accelerating attractor solutions and satisfy adiabatic sound speed conditions. § THEORETICAL FRAMEWORK Total action for this non-minimally coupled field-fluid sector <cit.> can be written as, S = ∫_Ω d^4x [√(-g)R/2κ^2-√(-g)ρ(n,s) + J^μ(φ_,μ + sθ_,μ + β_Aα^A_,μ) -√(-g)ℒ(ϕ,X). .-√(-g)f(n,s,ϕ,X)], The first term corresponds to the action's gravitational part, and the second and third terms represent the action related to the relativistic fluid sector. Fourth term implies the action for the k-essence scalar field. Finally, the last term describes the action for the non-minimal coupling. Non-minimal coupling term (n,s,ϕ,X) depends on the functions of both sectors of field (ϕ, X) and fluid (n,s). Varying the grand action respect to g_μν, we get the total energy-momentum tensor for this coupled system T_μν^ tot.=T_μν^(ϕ) + T_μν^(M) + T_μν^( int). Total energy-momentum tensor has been conserved, but the individual components have not been conserved. 
Variation of the above action with respect to the independent variables produces the modified field equation, ℒ_,ϕ + ∇_μ (ℒ_,X∇^μϕ) + f_,ϕ + ∇_μ (f_,X∇^μϕ)=0. In the context of a flat FLRW metric (ds^2 = -dt^2 +a(t)^2 d x^2), the above equation reduces, for a scalar field Lagrangian depending on both the potential and kinetic terms <cit.>, to [ ℒ_,ϕ + f_,ϕ] - 3Hϕ̇[ ℒ_,X + f_,X] + X (P_ int +f ) (3H ϕ̇) - ϕ̈[(ℒ_,X + f_,X) + 2X (ℒ_,XX + f_,XX) ] -ϕ̇^2 (ℒ_,ϕ X + f_,ϕ X ) = 0, and, for a purely kinetic scalar field <cit.>, to - 3Hϕ̇[ ℒ_,X + f_,X] + X (P_ int +f ) (3H ϕ̇) - ϕ̈[(ℒ_,X + f_,X) + 2X (ℒ_,XX + f_,XX) ] = 0. § RESULTS & DISCUSSION Introducing a set of dimensionless variables, we recast the field, fluid, and interaction sectors in terms of them. The Friedmann equations, rewritten in these variables, act as constraint equations of the dynamical system. The total number of independent variables fixes the dimension of the phase space, and depending on this dimension we divide the analysis into two cases. * I. Algebraic coupling with an arbitrary scalar field potential: For this case we choose the form of interaction <cit.> f = ρα(ϕκ)^m β X^n (with α, β, m, n constant). Using the linear stability approach, we obtain a total of eight critical points from the phase-space analysis, of which two sets are stable and two sets are of saddle type. The evolutionary dynamics of this coupled system show that a late-time stable accelerating phase can be achieved through this non-minimal coupling. A transfer of energy from field to fluid and finally from fluid to field can also be observed. The total equation-of-state (EOS) parameter of this coupled sector saturates near -1 at the late-time era, and at the present epoch the energy densities of dark matter and dark energy are found to be of the same order of magnitude. * II. Algebraic coupling with a constant scalar field potential: The form of interaction for this case <cit.> is chosen as f= g V_0 ρ^q X^β M^-4q (where q=-1 and g, V_0, β are constant). Due to the absence of the potential term, the dimension of the phase space is reduced to two. From the phase-space analysis of the dynamical system we find one stable critical point for this type of system. The phase space is constrained by the modified Friedmann equation, the condition for an accelerating universe, and the sound-speed condition. The evolution plots show that a late-time stable accelerating phase can be achieved and that energy is transferred from the field to the fluid sector through this framework of non-minimal coupling. The total EOS parameter of this coupled sector again saturates near -1 at the late-time era.
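The linear-stability classification invoked in both cases follows a standard recipe: write the cosmological equations as an autonomous system x' = f(x) in the dimensionless variables, locate the critical points f(x*) = 0, and classify each one through the eigenvalues of the Jacobian evaluated there. The sketch below illustrates that recipe on a deliberately generic two-variable system; the function f and the initial guesses are placeholders, not the actual coupled field-fluid equations of this work.

```python
import numpy as np
from scipy.optimize import fsolve

def f(x):
    # Placeholder autonomous system x' = f(x); replace with the equations obtained
    # from the dimensionless variables of the coupled field-fluid model.
    x1, x2 = x
    return np.array([-3.0 * x1 + x1 * x2, 1.0 - x2**2 - 0.5 * x1**2])

def jacobian(x, eps=1e-6):
    # Numerical Jacobian of f at x (forward differences)
    n = len(x)
    J = np.zeros((n, n))
    f0 = f(x)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f0) / eps
    return J

# Classify the critical points reached from a few different initial guesses
for guess in ([0.1, 0.9], [0.1, -0.9]):
    xc = fsolve(f, guess)
    eigs = np.linalg.eigvals(jacobian(xc))
    kind = ("stable (attractor)" if np.all(eigs.real < 0)
            else "unstable" if np.all(eigs.real > 0) else "saddle")
    print(f"critical point {np.round(xc, 3)}: eigenvalues {np.round(eigs, 3)} -> {kind}")
```

In the toy system the two fixed points come out as one attractor and one saddle, mirroring the kind of classification (stable versus saddle critical points) reported for the models above.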
http://arxiv.org/abs/2307.00310v1
20230701115156
Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
[ "Anvith Thudi", "Hengrui Jia", "Casey Meehan", "Ilia Shumailov", "Nicolas Papernot" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CR", "stat.ML" ]
Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot
===========================================================================

Differentially private stochastic gradient descent (DP-SGD) is the canonical algorithm for private deep learning. While it is known that its privacy analysis is tight in the worst case, several empirical results suggest that when training on common benchmark datasets, the models obtained leak significantly less privacy for many datapoints. In this paper, we develop a new analysis for DP-SGD that captures the intuition that points with similar neighbors in the dataset enjoy better privacy than outliers. Formally, this is done by modifying the per-step privacy analysis of DP-SGD to introduce a dependence on the distribution of model updates computed from a training dataset. We further develop a new composition theorem to effectively use this new per-step analysis to reason about an entire training run. Put together, our evaluation shows that this novel DP-SGD analysis allows us to now formally show that DP-SGD leaks significantly less privacy for many datapoints. In particular, we observe that correctly classified points obtain better privacy guarantees than misclassified points.

§ INTRODUCTION
Differential Privacy (DP) is the standard framework for private data analysis <cit.>; it formulates privacy as an upper bound on the worst-case privacy leakage of an algorithm. Making an algorithm differentially private limits the success any attack can have in determining, from the outputs alone, whether a given data point was or was not an input to the algorithm. To obtain this notion of indistinguishability, from which the guarantees of DP stem, the algorithm being considered needs to perform a noisy analysis of the data. The amount of noise required is typically calibrated to how sensitive the algorithm is to changes made to any of the individual datapoints being analyzed. In the case of deep learning, the canonical private training algorithm is DP-SGD <cit.>, which introduces Gaussian noise to gradients computed on individual training examples. To minimize the impact of noise on training convergence, much work has gone into improving the privacy analysis used to derive differential privacy guarantees for DP-SGD <cit.>. However, it was recently shown that the privacy analysis for DP-SGD is tight <cit.>; there exist datasets and models for which it is possible to find an attack whose success matches the upper-bound on privacy leakage given by the current analysis. To arrive at this result, the datasets being attacked were artificially constructed. In contrast, when training on common benchmark datasets like CIFAR10, <cit.> empirically saw that even strong privacy attacks perform significantly worse for many datapoints than suggested by the analysis of DP-SGD.
That is, it has been empirically suggested that the privacy leakage of a particular datapoint when training with a specific dataset is significantly lower than the worst-case guarantees of DP-SGD. Until now, this remained a conjecture. For instance, <cit.> drew these conclusions based on instantiating attacks which need not be conclusive given that attacks can always improve. Our work reconciles the tight worst-case analysis of DP-SGD with these empirical results. The privacy analysis for DP-SGD is data independent: it assumes an upper-bound on how much any individual datapoint from any dataset can contribute to a gradient update—a quantity known as the algorithm's sensitivity. To upper-bound sensitivity, a value is set ahead of time and enforced during training by clipping the gradient computed on each datapoint to a norm below this preset value. We observe that this overestimates the sensitivity of DP-SGD to a specific datapoint in a given dataset when it is likely we would have computed a similar update using a different datapoint in this dataset. In deep learning, many mini-batches in a dataset do produce similar gradients <cit.>. Hence, we propose an alternative privacy analysis of DP-SGD that is data-dependent, i.e., states the privacy guarantee of an individual datapoint with respect to training on a specific dataset, to be capable of capturing this similarity between different datapoints. Let us first focus on an individual update of DP-SGD. Intuitively, if many of the datapoints produced the same (or almost the same) gradient, then with high probability we would have obtained the same updated model with or without one of these datapoints. Making this intuition rigorous, we introduce a class of distributions we call sensitivity distributions: broadly they capture the difference between updates computed from a given mini-batch to sampling another mini-batch. From this, we derive new bounds on the privacy leakage of a single DP-SGD update that incorporates how concentrated these distributions are at small values, i.e., have many mini-batches that produce almost the same gradient. Next, we give a data-dependent bound on the overall privacy leakage of a full DP-SGD run. The current analysis considers the model that leaks the most privacy at every step (the worst-case model) and notes that this worst-case leakage simply composes (i.e., adds) during training. Yet, the sensitivity distributions are heavily dependent on the model being updated: e.g., there is a difference in gradients computed using a partially-trained model and a randomly-initialized model. Towards not relying on analyzing worst-case models, intuitively it should not matter what the privacy leakage of the worst-case models is if they are unlikely to be reached. More rigorously, we develop a new composition theorem which allows us to upper-bound the overall data-dependent privacy leakage of a training run using the expected privacy leakage at each step during training. These analytic results give a new framework to understand the data-dependent privacy guarantees of DP-SGD for individual datapoints. However, it remains to verify whether this analysis can explain why many datapoints have better privacy when training on benchmark datasets. We thus turn to experimentation. The crux of implementing our results is to repeat training several times to compute the expected per-step privacy leakage. 
Because this one-dimensional statistic is bounded by the existing worst-case privacy analysis, one achieves non-trivial estimates with few samples. Doing this: * We show that for common benchmark datasets, the bounds we derived are significantly lower than what the classical analysis suggests. For some datapoints, we observe more than a magnitude improvement in the privacy guarantee ε. This explains prior results we motivated our work with: some datapoints exhibit stronger-than-expected privacy when training on these datasets despite the data-independent analysis of DP-SGD being tight. * In our framework, we observe a disparity where correctly-classified points obtain better privacy guarantees than misclassified points. In other words, training algorithms that lead to high-performing models quantifiably leak less privacy for many points. This is as they reach states that update with respect to large clusters of datapoints the same. We hypothesize that designing model architectures to be more performative may also make them more private. * In classical privacy analysis, training with higher sampling rates leaks more privacy. However we find that for certain update rules, training with higher sampling rates can give better data-dependent privacy because mini-batch updates concentrate on the dataset mean; this leads to many mini-batches with similar updates. The consequences of our work are far reaching: having better data-dependent privacy has implications for unlearning, generalization, and memorization because of how DP formulates privacy by preventing distinguishability between the models trained with or without a datapoint. For unlearning, strong data-dependent privacy implies that the models coming from training with a datapoint are indistinguishable to the models trained without it. For generalization and memorization, strong data-dependent privacy implies that the models coming from training without a datapoint perform similarly to those that had trained with it. With our framework, one can now say a specific datapoint does not need to be unlearned, or that a datapoint will not be memorized. § BACKGROUND §.§ DP-SGD Analysis DP-SGD <cit.> is a differentially private version of stochastic gradient descent (SGD). Given a dataset X, DP-SGD repeatedly computes the following deterministic update rule U(X_B = {x: x ∼ X with probability L/|X|}) = ∑_x ∈ X_B∇_θℒ(θ,x)/ max(1,||∇_θℒ(θ,x)||_2/C) and then does the update θ→θ - η1/L(U(X_B) + N(0,σ^2 C^2)). The current privacy analysis of DP-SGD uses Rényi-DP (RDP) <cit.>. An algorithm A is (α,ϵ)-Rényi DP if for all neighbouring datasets X, X' (i.e. hamming distance 1 apart) we have D_α(A(X)|| A(X')) ≤ϵ where for two probability distributions P,Q we define the α-Rényi divergence as D_α(P||Q) 1/α -1ln𝔼_x ∼ Q (P/Q)^α. For an introduction to typical DP definitions, we refer the interested reader to <cit.>. The analysis of the RDP guarantee of DP-SGD follows two steps: * Analyzing the privacy guarantee of each step η1/L(U(X_B) + N(0,σ^2 C^2)), which is the same as U(X_B) + N(0,σ^2 C^2) by the post-processing property of RDP * Understanding the privacy guarantee of releasing all the updates The first part was analytically studied in <cit.> and is called the sampled Gaussian mechanism. The analytic privacy cost of releasing all the updates follows from the composition theorem for RDP <cit.>. In this paper we provide new per-step and composition privacy analyses for DP-SGD that are specific to a pair of neighbouring datasets X,X'. 
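To make the mechanism concrete, the following is a minimal NumPy sketch of a single DP-SGD step as written above: Poisson subsampling with rate L/|X|, per-example clipping to norm C, summation, Gaussian noise with standard deviation σC, and the 1/L-scaled parameter update. The gradient oracle grad_fn and the toy quadratic loss at the end are assumptions of this sketch, not part of any particular library.

```python
# Sketch of one DP-SGD step: Poisson sampling, per-example clipping,
# Gaussian noise, 1/L scaling.  `grad_fn(theta, x)` is an assumed
# per-example gradient oracle.
import numpy as np

def clip(g, C):
    """Scale a per-example gradient so its L2 norm is at most C."""
    return g / max(1.0, np.linalg.norm(g) / C)

def dp_sgd_step(theta, X, grad_fn, L, C, sigma, eta, rng):
    n = len(X)
    # Poisson subsampling: each example is included with probability L/n.
    mask = rng.random(n) < (L / n)
    batch = [X[i] for i in range(n) if mask[i]]

    # U(X_B): sum of clipped per-example gradients.
    U = np.zeros_like(theta)
    for x in batch:
        U += clip(grad_fn(theta, x), C)

    # Gaussian noise with standard deviation sigma * C, then the update.
    noise = rng.normal(0.0, sigma * C, size=theta.shape)
    return theta - eta * (U + noise) / L

# Toy usage with the quadratic loss 0.5 * ||theta - x||^2 per example.
rng = np.random.default_rng(0)
theta = np.zeros(3)
data = [rng.normal(size=3) for _ in range(256)]
theta = dp_sgd_step(theta, data, lambda th, x: th - x,
                    L=32, C=1.0, sigma=1.0, eta=0.1, rng=rng)
```

Returning to guarantees that are stated for a specific pair of neighbouring datasets X, X':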
<cit.> called this type of guarantee Per-Instance DP (though it is colloquially called "data-dependent DP"), and noted it directly inherits properties from normal DP such as post-processing. §.§ Fully Adaptive Composition A primary challenge in using our new analysis will be to do composition without taking the worst privacy leakage at every step to bound the overall data-dependent privacy leakage of a training run. Towards this, there has been recent work on "Fully Adaptive Rényi-DP Composition" <cit.>; these are composition theorems that take into account the random variable for the privacy leakage at every step (in our case, induced by the randomness of what the model is at the i'th step). <cit.> show that instead of using the worst-case per-step guarantee and adding up those to get an overall guarantee, one can instead consider the worst-case sum of per-step privacy leakages. Formally, in the context of DP-SGD this is the L_∞ norm of the distribution (over training runs) of the sum of per-step privacy leakages dependent on the current attained model (and not the worst-case model). Seeing if we can effectively apply this theorem for DP-SGD, we note it is first hard to measure what such a bound is for DP-SGD; we once again need to find a (hopefully small) constant that bounds the sum of our per-step guarantees almost surely. Furthermore, we may not even expect this to be significantly smaller than just adding up the worst per-step guarantees; it is imaginable that the training run goes to high privacy leaking states with non-zero probably at every step. Our hope, however, is that if the probability of going to these high-privacy leaking states is small (not necessarily 0), then the overall privacy leakage can be much smaller than the worst-case sum. We are not aware of how to adapt the proof of the Fully Adaptive Composition Theorem of <cit.> (Theorem 3.1 <cit.>) to consider this. Hence we will later derive a new Fully Adaptive Composition theorem. § A DATA-DEPENDENT ANALYSIS OF DP-SGD We now introduce our analysis of DP-SGD which surfaces a dependence on new random variables we term sensitivity distributions. Broadly, given two datasets, sensitivity distributions capture how similar the updates between mini-batches from either dataset are. In essence, we aim to capture the mean effect of introducing a datapoint to the mini-batch updates, rather than the worst-case effect, as done in the classical analysis of DP-SGD. We later verify that this analysis allows us to explain rigorously why many datapoints have much better privacy than suggested by the classical analysis. §.§ Introducing Sensitivity Distribution by the (ϵ,δ)-DP Case To first understand how one can incorporate sensitivity distributions to analyze DP-SGD, we look at the typical (ϵ,δ)-DP analysis for the sample Gaussian mechanism. Recall the sample Gaussian mechanism is defined as follows. Let A(X) = U(X) + N(0,σ) (i.e., the Gaussian mechanism), and X_B be a mini-batch Poisson sampled from the dataset X with individual probabilities ℙ_x(1).[We assume these are defined irrespective of the dataset, i.e., a datapoint always has the same sampling probability independent of the dataset it belongs to] Then the sampled Gaussian mechanism is defined as M(X) = A(𝐗_𝐁) where here we think of 𝐗_𝐁 as a random variable (unless otherwise stated we think of X_B, not bold-face, as a specific mini-batch). 
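For reference, here is a minimal sketch of the sampled Gaussian mechanism just defined, written for an arbitrary update rule U and per-example inclusion probabilities: the mini-batch drawn inside sampled_gaussian_mechanism plays the role of the random variable 𝐗_𝐁, while a fixed list passed directly to gaussian_mechanism plays the role of a specific X_B. The function names and signatures are assumptions of the sketch.

```python
# Sketch of A(X_B) = U(X_B) + N(0, sigma^2) and of the sampled Gaussian
# mechanism M(X) = A(X_B), with X_B Poisson-sampled using per-example
# inclusion probabilities p[i].
import numpy as np

def gaussian_mechanism(U, batch, sigma, dim, rng):
    return U(batch) + rng.normal(0.0, sigma, size=dim)

def sampled_gaussian_mechanism(U, X, p, sigma, dim, rng):
    batch = [x for i, x in enumerate(X) if rng.random() < p[i]]
    return gaussian_mechanism(U, batch, sigma, dim, rng)
```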
The typical (ϵ,δ)-DP analysis for this mechanism would use Δ_U = sup_neighbouring X,X' ||U(X) - U(X')||_2 to first reason about the privacy of A, and then do subsample boosting to get the privacy guarantees of M. However, one can forego these two disjoint steps when analyzing the privacy guarantees between a specific pair of datasets X, X' = X ∪{x^*}. To be specific, instead of Δ_U we will use a random variable, i.e., a sensitivity distribution, to derive privacy guarantees for M. Let Δ_U,x^*(X_B) = || U(X_B) - U({X_B ∪ x^*}) ||_2 be a function over mini-batches, which becomes a random variable induced by the sampling of X_B from a dataset X. To understand how this random variable will be used, note that there is a constant C_δ,σ [For example, one can take C_δ, σ = √(2 ln (1.25/δ))/σ] such that taking A(X) = U(X) + N(0,σ^2) we have ℙ(A(X_B) ∈ S) ≤ e^C_δ,σΔ_U,x^*(X_B)ℙ(A({X_B ∪ x^*}) ∈ S) + δ for any mini-batch X_B by the usual Gaussian mechanism analysis. When Δ_U,x^*(X_B) < Δ_U, we see this gives a tighter expression for the individual privacy between using the mini-batch X_B versus X_B ∪{x^*}, so we might hope to incorporate how concentrated Δ_U,x^*(X_B) is at small values into the privacy analysis of the sample Gaussian mechanism. With these definitions, we can derive an inequality bounding ℙ(M(X') ∈ S) by ℙ(M(X) ∈ S) to some power multiplied by certain constants depending on the sensitivity distribution. This is stated as Lemma <ref> in the Appendix. The core idea of the proof is to express sampling mini-batches from X' as sampling mini-batches from X and then sampling x^*; from the expression this gives for ℙ(M(X') ∈ S), one applies Holder's inequality to conclude the lemma statement. From Lemma <ref> one can obtain one of the inequalities needed to give a Per-Instance DP guarantee for X,X'. For p ∈ (1,∞), let a_p = ℙ_x^*(1) (𝔼_x_B(e^C_δ,σΔ_U,x^*(X_B)p))^1/p, then taking ϵ' = ln(a_p^1/1-1/pδ'^-1/p-1 + ℙ_x^*(0)) and δ” = ℙ_x^*(1)δ + δ' we have for X' = X ∪ x^*: ℙ(M(X') ∈ S) ≤ e^ϵ'ℙ(M(X) ∈ S) + δ” The proof strategy is analogous to Proposition 3 in <cit.> and is presented in Appendix <ref>. However, in attempting to extend this approach of reanalyzing per-step (ϵ,δ)-DP guarantees to analyze a full training run with DP-SGD, we have the issue of composition: how do these improvements at a per-step level translate to overall better guarantees for DP-SGD? To understand the difficulty of this, note that implicitly U depends on the current model state, and hence so does the sensitivity distribution. So we expect the sensivity distribution to evolve during training, and further have a distribution of sensitivity distribution at each step (first a distribution on the model we could have at that training step, and then the sensitivity distributions for each of those models). It is not clear to us how to handle this complexity when directly studying (ϵ,δ)-DP guarantees. For this reason, we now move to studying Rényi-DP guarantees, which we show allows us to handle this complexity. Note that typical analysis of DP-SGD already uses Rényi-DP. §.§ Rényi-DP Analysis for DP-SGD We now turn to analyzing the Rényi-DP guarantees of DP-SGD. We first adapt the per-step analysis of the sampled Gaussian mechanism to introduce sensitivity distributions. We then consider how to analyze adaptive composition to utilize sensitivity distributions. First note that we will primarily focus on integer values of α for Rényi-DP in our per-step analysis. 
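As a concrete illustration of the quantity Δ_U,x^*(X_B) defined above, the sketch below estimates its distribution by repeatedly Poisson-sampling mini-batches and comparing the update computed with and without x^*. The toy update rule (a sum of clipped vectors) and all names are assumptions of the sketch; the empirical maximum also gives a crude estimate of the worst-case gap used in the next subsection.

```python
# Sketch: empirical estimate of the sensitivity distribution
# Delta_{U,x*}(X_B) = || U(X_B) - U(X_B u {x*}) ||_2 over Poisson-sampled
# mini-batches.  `update_rule` stands in for U and is an assumption here.
import numpy as np

def sensitivity_distribution(update_rule, X, x_star, q, n_samples, rng):
    gaps = []
    for _ in range(n_samples):
        mask = rng.random(len(X)) < q                  # Poisson mini-batch
        batch = [X[i] for i in range(len(X)) if mask[i]]
        gaps.append(np.linalg.norm(update_rule(batch)
                                   - update_rule(batch + [x_star])))
    return np.array(gaps)

# Toy example: sum of clipped vectors as the update rule (clip norm C = 1).
rng = np.random.default_rng(1)
X = [rng.normal(size=5) for _ in range(200)]
x_star = rng.normal(size=5)
clip = lambda g: g / max(1.0, np.linalg.norm(g))
U = lambda batch: sum((clip(v) for v in batch), np.zeros(5))
gaps = sensitivity_distribution(U, X, x_star, q=0.1, n_samples=500, rng=rng)
print("mean:", gaps.mean(), "95th pct:", np.quantile(gaps, 0.95),
      "max (sup estimate):", gaps.max())
```

We now return to the restriction to integer orders α made above.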
This is largely for simplicity, knowing that as Rényi divergences are increasing in their order α, we can bound the guarantee for any α by the guarantee for ⌈α⌉. In terms of notation, we will use X'_B to be mini-batches from X', and X'_B^α to be an α tuple of mini-batches from X' (sampled independently unless otherwise stated). Analogously we use X_B^α to be an α tuple of mini-batches from X. §.§.§ A Data-Dependent Per-Step Rényi-DP We first state how to make the results of <cit.> data-dependent. In particular, we will restate them to be specific to the Rényi-DP guarantee between X and X' = X ∪{x^*}, and will incorporate Δ_U,x^* = sup_X_B ∼ XΔ_U,x^*(X_B) where Δ_U,x^*(X_B) is as defined in Section <ref>. For integer α > 1, the sampling Gaussian mechanism with noise σ and sampling probability ℙ_x^*(1) for x^* is (α,ϵ)-Rényi DP for: ϵ = 1/α -1ln(∑_k=0^αα k (1 - ℙ_x^*(1))^α -kℙ_x^*(1)^k exp(Δ_U,x^*^2(k^2 - k)/2 σ^2)) The proof strategy is to replace the sensitivity upper-bound (assumed to be 1) in the results of <cit.> with having the bound Δ_U,x^*, which is equivalent if we restrict the possible datasets to X and X'. The proof is in Appendix <ref>. While the purpose of our paper is to highlight the effect of having similar mini-batches, and Δ_U,x^* has no dependence on the mini-batches, it is actually the case that for some mechanisms Δ_U,x^*(X_B) = Δ_U,x^* ∀ X_B. For example, for any function g on datapoints, U(X_B) = ∑_x_i ∈ X_B g(x_i) is one such update rule. In this case, the above analysis may lead to lower privacy guarantees than considering the sensitivity distribution presented in the following subsection. This is as, here, we are effectively only ever comparing how similar U(X_B) is to U(X_B ∪{x^*}), whereas in the next subsection, we must compare how similar U(X_B) is to U(X_B') for any minibatch X_B' ∼ X': for some update rules U this may be significantly worse. In short, the exact sensitivity distribution one looks at to analyze an update rule U for given pair of datasets X,X' can have a meaningful impact on the data-dependent privacy guarantees we conclude, as will be seen experimentally. §.§.§ Mini-batch Effect on Rényi DP Having seen how to make the Rényi DP analysis of the sample Gaussian mechanism data-dependent by minimally adapting past analysis, we turn to make the analysis also dependent on the distribution of updates coming from mini-batch sampling. Such a dependence is given in the following theorem. The main intuition is to rely on convexity, which is always true for the second argument of Rényi divergences. We then also do direct calculations involving Gaussians. The proof is in Appendix <ref> Let α > 1 be an integer. The sampling Gaussian mechanism with noise σ and mini-batch sampling probability ℙ(X_B) (and ℙ(X_B') respectively for X') is (α,ϵ)-Rényi DP for: ϵ = max{∑_X_Bℙ(X_B) 1/(α-1)ln (∑_X'_B^αℙ(X'_B^α) e^-1/2σ^2(∑_i ||U(X'_B^i)||_2^2 - (α-1) ||U(X_B)||_2^2 - ||Δ_α(X'_B^α,X_B)||_2^2)) , ∑_X'_Bℙ(X'_B) 1/(α-1)ln (∑_X_B^αℙ(X_B^α) e^-1/2σ^2(∑_i ||U(X_B^i)||_2^2 - (α-1) ||U(X'_B)||_2^2 - ||Δ_α(X_B^α,X'_B)||_2^2)) } where Δ_α(X'_B^α,X_B) = (∑_i U(X'_B^i)) - (α - 1) U(X_B). In particular, the first and second element in the set upper-bounds D_α(M(X')||M(X)) and D_α(M(X)||M(X')) respectively. While the dependency to a sensitivity distribution is not as clear as in the previous cases, one can view ∑_i ||U(X'_B^i)||_2^2 - (α-1) ||U(X_B)||_2^2 - ||Δ_α(X'_B^α,X_B)||_2^2 as the sensitivity distribution in this case: it is a random variable over X_B ∼ X and X'_B^α∼ X'^α. 
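Both per-step bounds can be evaluated numerically. The sketch below implements the binomial-sum bound of the first theorem, which only needs the worst-case gap Δ_U,x^*, together with a Monte Carlo estimate of the first term of the mini-batch-dependent bound, i.e. of the transformed expectation of the random variable discussed in the previous paragraph. The arrays of precomputed update vectors, the function names, and the default sample counts are assumptions of the sketch rather than a fixed procedure.

```python
# Sketch of the two per-step quantities.
# (1) renyi_eps_pairwise: closed-form bound using only the worst-case gap.
# (2) renyi_eps_minibatch_mc: Monte Carlo estimate of the first element of
#     the max in the mini-batch-dependent bound (an estimate of
#     D_alpha(M(X') || M(X))); swap the two update arrays for the other
#     direction.  `updates_X`, `updates_Xp` are 2-D arrays with one sampled
#     update vector U(X_B), U(X'_B) per row.
import numpy as np
from math import comb, exp, log

def renyi_eps_pairwise(alpha, q, sigma, delta_sup):
    total = sum(comb(alpha, k) * (1 - q) ** (alpha - k) * q ** k
                * exp(delta_sup ** 2 * (k * k - k) / (2 * sigma ** 2))
                for k in range(alpha + 1))
    return log(total) / (alpha - 1)

def renyi_eps_minibatch_mc(alpha, sigma, updates_X, updates_Xp, rng,
                           n_outer=20, n_inner=20):
    outer = []
    for _ in range(n_outer):
        u_b = updates_X[rng.integers(len(updates_X))]      # one U(X_B)
        inner = []
        for _ in range(n_inner):
            ups = updates_Xp[rng.integers(len(updates_Xp), size=alpha)]
            delta_a = ups.sum(axis=0) - (alpha - 1) * u_b   # Delta_alpha
            expo = (np.sum(np.linalg.norm(ups, axis=1) ** 2)
                    - (alpha - 1) * np.linalg.norm(u_b) ** 2
                    - np.linalg.norm(delta_a) ** 2)
            inner.append(np.exp(-expo / (2 * sigma ** 2)))
        outer.append(np.log(np.mean(inner)) / (alpha - 1))
    return float(np.mean(outer))
```

Estimating the inner expectation before taking the logarithm, and then averaging over the outer samples, mirrors the small-sample procedure used later in the experiments.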
However, the contribution of this random variable to the privacy guarantees of the sample Gaussian mechanism is through applying a transformation on its fixed X_B marginal values and taking its expectation over X_B. §.§.§ Fully Adaptive Composition over SGD Steps We now turn to composing our new per-step guarantees to get an overall data-dependent privacy bound for DP-SGD. However, typical DP and Rényi-DP composition theorems used for DP-SGD work by taking the worst-case privacy guarantee at every step and summing these up. The worst-case model at every step could effectively have a sensitivity distribution concentrated at large values at which point we do not expect our previous analyses to improve over the typical analysis of DP-SGD. As previously discussed in Section <ref>, it is also not clear how to utilize recent Fully Adaptive Composition theorems, which do not look at worst-case leakages at every step <cit.>, to analyze DP-SGD. In this subsection, we provide a new fully adaptive composition theorem for Markov Processes (which includes DP-SGD) by only looking at expected privacy losses at each step. The main trick is to rely on the Markov property to decompose the composition to a form where the expectation appears, and then applying Holder's inequality with the right powers such that induction follows. Let p ∈ (1,∞) and consider a sequence of functions X_1(x_1), X_2(x_1,x_2), X_3(x_2,x_3),⋯ X_n(x_n-1,x_n) where X_i is a density function in the second arugment for any fixed value of the first argument, except X_1 which is a densitiy function in x_1. Consider an analogous sequence Y_1(x_1),⋯, Y_n(x_n-1). Then letting X = ∏_j=1^n X_j be the density function for a sequence x_1,⋯,x_n generated according to the markov chain defined by X_i, and similarly Y, we have D_α(X || Y) ≤1/α -1 (∑_i=0^n-2(p-1)^i/p^i+1ln (𝔼_X_1,⋯ X_n-(i+1) ((e^(g_p^i(α) -1)D_g_p^i(α)(X_n-i|| Y_n-i))^p))) + 1/α -1 ((p-1)^n-1/p^n) ln ((e^(g_p^n-1(α) -1)D_g_p^n-1(α)(X_1|| Y_1))^p) where g_p(α) = p/p-1α - 1/p and g_p^i is g_p composed i times, where we defined g_p^0(α) = α To understand the dependence on p in Theorem <ref>, consider for a moment p =2. In this case, we observe that at the i'th step, we need to compute a Rényi divergence of order ∼ 2^iα. It is known that the Rényi divergence D_c(P||Q) grows with c <cit.>, and in the case of the Gaussian mechanism, this growth is linear with c <cit.>. Hence this exponential growth in the Rényi divergence order can prove impractical as a useful tool to analyze DP-SGD. However, as p →∞ we see that the growth on the order of the divergence shrinks. Balancing the p value. However, by taking larger p values we are effectively taking larger L_p-norms and so effectively turn to worst-case per-step analysis as p →∞. Hence it is desirable to choose p just sufficient for there to not be a significant blow-up in the order of the divergences for a given n. This can be done by analyzing how g_p^i(α) grows. Taking p = O(n) is sufficient for g_p^i(α) ≤ 2 α for all i ≤ n. In particular, p = 3n works for sufficiently large n. Estimating with a constant number of samples. In cases where one does not know the expectations used in Theorem <ref> analytically, e.g., DP-SGD when applied to deep learning, one can resort to empirically estimating the means. 
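To make the use of this bound concrete when the expectations are estimated empirically, the sketch below assembles the weighted sum of the composition theorem from per-step moment estimates and checks the growth of the divergence order g_p^i(α). The list of per-step estimates is assumed to come from repeated training runs; producing those estimates is outside the sketch.

```python
# Sketch: combining per-step moment estimates via the fully adaptive
# composition bound.  step_moments[i] is an assumed empirical estimate of
# E[(exp((g_p^i(alpha)-1) * D_{g_p^i(alpha)}(X_{n-i} || Y_{n-i})))^p]
# (i = 0 corresponds to the last step); first_step_moment is the i = n-1
# term for X_1, Y_1.
from math import log

def g_p(alpha, p):
    return (p / (p - 1)) * alpha - 1.0 / p

def g_p_iter(alpha, p, i):
    """g_p composed i times (g_p^0 is the identity)."""
    for _ in range(i):
        alpha = g_p(alpha, p)
    return alpha

def composed_bound(alpha, p, step_moments, first_step_moment):
    n = len(step_moments) + 1
    total = sum(((p - 1) ** i / p ** (i + 1)) * log(m)
                for i, m in enumerate(step_moments))
    total += ((p - 1) ** (n - 1) / p ** n) * log(first_step_moment)
    return total / (alpha - 1)

# Order-growth check: with p = 3n the composed order stays below 2 * alpha.
alpha, n = 8, 100
p = 3 * n
print(max(g_p_iter(alpha, p, i) for i in range(n)))   # well below 16
```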
Our goal is to understand how much better our data-dependent guarantees are than the naive baseline for DP-SGD on commons datasets, hence we wish to estimate the expression of Theorem <ref> with a precision relative to the classical worst-case upper-bound. This can in fact be done with high probability with a constant (only depending on the worst-case upper-bound) number of samples using Hoeffding's inequality. The following fact focuses on estimating the i'th per-step guarantee with a precision relative to the worst-case per-step guarantee. In particular, we have the i'th per-step guarantee 1/α-1(p-1)^i/p^i+1ln (𝔼_X_1, ⋯, X_n-(i+1) f) 1/α-1(p-1)^i/p^i+1ln (𝔼_X_1,⋯ X_n-(i+1) ((e^(g_p^i(α) -1)D_g_p^i(α)(X_n-i|| Y_n-i))^p)) is less than the worst-case per-step privacy guarantee ϵ/n if 𝔼_X_1,⋯ X_n-(i+1) f ≤ e^(α-1) 3 ϵ for p = 3n. Hence we describe the number of samples needed to estimate 𝔼 f with precision relative to e^(α-1) 3 ϵ (with high probability), which can be done in a constant number of samples relative to the worst-case bound using Hoeffding's inequality. Let ϵ/n be the classical α-Rényi DP guarantee for the i'th step, and ϵ'/n be the analogous 2α-Rényi DP guarantee for the i'th step. Then for l ≥- ln(J)/c^2 e^6(α-1)ϵ - 3(2α -1) ϵ' and p = 3n with n s.t g_p^n-1≤ 2α, we have ℙ(|𝔼^l f - 𝔼f| ≥ c e^(α-1) 3 ϵ) ≤ J. Here 𝔼^l denotes the empirical mean over l samples. Applying to DP-SGD. To interpret Theorem <ref> in the context of DP-SGD, we can let X_i = be the distribution of the i'th model update (for a fixed (i-1)'th model) when training on one dataset D, and similarly Y_i when training on a neighbouring dataset D'. One can then get an upper-bound on the Rényi-DP guarantee of releasing all the updates (as is done in DP-SGD) by applying Theorem <ref> to get an upper-bound on both D_α(Train_DP-SGD(D)||Train_DP-SGD(D')) and D_α(Train_DP-SGD(D')||Train_DP-SGD(D)), where here we note Train_DP-SGD is just the Markov chain of the intermediate model updates. § EMPIRICAL RESULTS In Section <ref> we provided a new framework to analyze DP-SGD's data-dependent privacy guarantees. This followed by providing new per-step analyses (Theorem <ref> and <ref>), and a way to account for the overall privacy guarantee by summing the expected per-step guarantees (Theorem <ref>). We now highlight several conclusions our framework allows us to make about data-dependent privacy when using DP-SGD. For conciseness, we defer a subset of the experimental results to Appendix <ref>. Experimental Setup. In the subsequent experiments, we empirically verify our claimed guarantees on MNIST <cit.> and CIFAR-10 <cit.>. Unless otherwise specified, LeNet-5 <cit.> and ResNet-20 <cit.> were trained on the two datasets for 10 and 200 epochs respectively using DP-SGD, with a mini-batch size equal to 128, ϵ=10, δ = 10^-5, α = 8 (in cases of Renyi DP), and clipping norm C = 1.0. All the experiments are repeated 100 times by sampling 100 data points to obtain a distribution/confidence interval if not otherwise stated. Regarding hardware, we used NVIDIA T4 to accelerate our experiments. §.§ Many Datapoints have Better Privacy Here we compare the guarantees given by Theorem <ref> for analyzing the per-step guarantee in DP-SGD to the guarantee given by classical analysis (see Section 3.3 in <cit.>), and plot expectations over multiple trials starting from the same initial weights as is needed for Theorem <ref>. 
In particular, we take X to be the full MNIST training set, and randomly sample a data point x^* from the test set to create X' = X ∪ x^* (as mentioned earlier, we repeat the sampling of x^* 100 times to obtain a confidence interval). We train 10 different models on X with the same initialization and compute the expected per-step guarantees between X and X' over the training run, as plotted in Figure <ref>. We can see that our expected guarantee decreases with respect to the baseline as we progress through training. This persists regardless of the expected minibatch size, the strength of DP used during training, and model architectures; see Figure <ref> in Appendix <ref>. Hence, by Theorem <ref> we can conclude that the Rényi divergence D_α(Train_DP-SGD(X) || Train_DP-SGD(X')) is significantly less than the baseline for many data points. To see our improvement over the max of D_α(Train_DP-SGD(X) || Train_DP-SGD(X')) and D_α(Train_DP-SGD(X') || Train_DP-SGD(X)), i.e., the Rényi-DP guarantee, we computed the expectation when training on X and X'= X ∪{x^*} for 10 training points x^* where X is now the training set of MNIST with one point removed and X' is the full training set. Our results are shown in Figure <ref> where we see a similar decreasing trend relative to the baseline over training: we conclude by Theorem <ref> that many datapoints have better data-dependent Rényi DP than expected from the baseline. In other words, we conclude many datapoints have stronger data-dependent privacy guarantees than can be demonstrated through the classical data-independent analysis. However the previous figures only show the average effect over datapoints. In Figures <ref> and <ref> we plot the distribution of per-step guarantees over 500 data points in CIFAR10. The key observations are (a) there exists a long tail of data points with significantly better data-dependent privacy than the baseline, and (b) such improvements mostly exist in the middle and end of the training process. Next, we turn to understanding what datapoints are experiencing better privacy when using DP-SGD, i.e., the data points in the previously observed long-trail. In Figure <ref>, we plot the guarantee given by Theorem <ref> for correctly and incorrectly classified data points at different stages of training (beginning, middle, end) for varying architectures on CIFAR10 (and for MNIST in Figure <ref> in Appendix <ref>). We see that on average correctly classified data points have better per-step privacy guarantees than incorrectly classified data points across training with different architectures. It is also worth noting that this disparity holds most strongly towards the end of training. §.§ Higher Sampling Rates can be Better In normal SGD one computes a mean as the update rule for each step U(X_B) = 1/|X_B|∑_x ∈ X_B∇_θℒ(θ,x)/ max(1,||∇_θℒ(θ,x)||_2/C). However, DP-SGD computes a weighted sum U(X_B) = 1/L∑_x ∈ X_B∇_θℒ(θ,x)/ max(1,||∇_θℒ(θ,x)||_2/C). Note the subtle difference between dividing by a fixed constant L (typically the expected mini-batch size when Poisson sampling datapoints) and by the mini-batch size |X_B|. This means for the sum the upper-bound on sensitivity is C/L, while for the mean the upper-bound on sensitivity is only C (consider neighbouring mini-batches of size 1 and 2). Hence using the mean update rule requires far more noise and so is not practical to use. We highlight how our analysis by sensitivity distributions allows better analysis of the mean update rule. 
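The two update rules just contrasted can be written side by side. The sketch below also illustrates why the mean rule benefits from larger mini-batches: adding x^* to a mini-batch shifts the mean update by ||x^* - mean(X_B)|| / (|X_B| + 1), which shrinks as the mini-batch grows. Function names and the toy data are assumptions of the sketch.

```python
# Sketch: the weighted-sum rule used by DP-SGD versus the plain mean rule.
# `clipped` is a list of already-clipped per-example gradients (norm <= C).
import numpy as np

def U_sum(clipped, L):
    """DP-SGD's rule: (1/L) * sum of clipped gradients.
    Worst-case sensitivity C / L."""
    return np.sum(clipped, axis=0) / L

def U_mean(clipped, dim):
    """Plain SGD's rule: mean of clipped gradients.
    Worst-case sensitivity C (compare mini-batches of size 1 and 2)."""
    return np.mean(clipped, axis=0) if clipped else np.zeros(dim)

# For the mean rule the realised gap shrinks with the mini-batch size.
rng = np.random.default_rng(0)
dim, C = 10, 1.0
x_star = rng.normal(size=dim)
x_star *= C / np.linalg.norm(x_star)
for B in (4, 32, 256):
    batch = list(rng.normal(scale=0.1, size=(B, dim)))
    gap = np.linalg.norm(U_mean(batch, dim) - U_mean(batch + [x_star], dim))
    print(B, float(gap))
```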
We compute the bound on D_α(M(X')||M(X)) and D_α(M(X)||M(X')) given by Theorem <ref> for the mean update rule, where we estimated the inner and outer expectation using 20 samples, i.e., 20 random X_B'^α (or X_B^α) for each of the 20 random X_B (or X_B'). We obtain Figure <ref> and <ref> by repeating this for 500 data points in CIFAR10 while varying the training stage. We observe that for both divergences, we beat the baseline analysis by more than a magnitude at the middle and end of training. We conclude Theorem <ref> gives us better Rényi DP guarantees for the mean update rule. Furthermore, counter-intuitively to typical subsample privacy amplification, in Figure <ref> we see that our bound decreases with increasing expected minibatch size: we attribute this to convergence to the mean, whereby increasing the expected minibatch size leads to sampled minibatches having similar updates more often and hence the sensitivity distribution concentrates at smaller values. An analogous result is shown for MNIST in Figure <ref> (in Appendix <ref>). § DISCUSSION AND CONCLUSION We now describe implications of the the data-dependent guarantees we derived for DP-SGD and also some questions that follow from our framework. Machine Unlearning. Given a training algorithm M, machine unlearning <cit.> applies an algorithm U such that U(M(X')) ≈ M(X) where X' = X ∪{x^*}. However, if we already know M(X') ≈ M(X), then one can argue U can just be the identity function, i.e., a no-operation. In the context of our data-dependent guarantees for DP-SGD, when we have a small (α,ϵ)-Rényi DP guarantee, we know that M(X') ≈ M(X) in the sense of Rényi Divergences. Hence, by doing nothing we have approximately unlearnt x^* where approximation is measured by Rényi divergences. Semantically, the models obtained by training with x^* are already quite likely to have been obtained if we had trained without x^* but still trained on the remaining dataset. Memorization and Generalization. The memorization of a training algorithm M for a datapoint x^* on a training dataset X' is defined as mem(M,X',x^*) = ℙ_h ∼ M(X')(h(x^*) is correct) - ℙ_h ∼ M(X' ∖ x^*)(h(x^*) is correct) <cit.>. However, this is bounded by post-processing once we have a data-dependent guarantee. Hence, when using our framework to conclude better data-dependent privacy guarantees for a particular datapoint x^*, we have also concluded this point is unlikely to be memorized. Note this definition of memorization is equivalent to asking whether we generalize to x^* when we train on X' ∖ x^*: lower memorization means high generalization. Towards understanding what characterizes points we are not likely to memorize, our theory shows that points whose gradients converge to 0 quickly when training with or without them are less likely to be memorized. In other words, points that are optimized quickly are generalized too when using DP-SGD. However, checking if the predictions on a point quickly converged (i.e., the point was optimized quickly) has also been empirically shown to lead to SOTA approaches for predicting which test inputs are correctly classified when using private (and not private) deep learning  <cit.>. In light of this connection, we believe improvements in selective classification, in terms of finding new behavior predictive of correctness, will come hand-in-hand with analytic benefits for individual privacy and vice-versa. Future work. 
We believe our composition Theorem <ref> need not be tight: it relies on repeatedly applying Holder's inequality, which introduces several potentially loose inequalities. Furthermore, our analysis for estimating the expectations (Fact <ref>) and setting p values (Fact <ref>) is not intended to be optimal for Theorem <ref>. In particular, we believe these two statistics are useful properties to compare future bounds with, representing the ability to empirically use them and their blow-up respectively. We have given a framework to concretely know when a datapoint has better data-dependent privacy guarantees, and by doing so, have also given a method of knowing which datapoints do not need to be unlearned and which datapoints are less likely to be memorized. However, in terms of providing privacy guarantees to users, data-dependent guarantees are limited to an adversary that can only observe a single pair of neighbouring datasets (or a small collection of similar datasets). This is sufficient to capture the membership inference attacks used in the literature, but such an assumption could be bypassed by an aware adversary. Hence, understanding when data-dependent privacy is useful for typical users remains an open question.
http://arxiv.org/abs/2307.02569v1
20230705181104
Securing Cloud FPGAs Against Power Side-Channel Attacks: A Case Study on Iterative AES
[ "Nithyashankari Gummidipoondi Jayasankaran", "Hao Guo", "Satwik Patnaik", "Jeyavijayan", "Rajendran", "Jiang Hu" ]
cs.CR
[ "cs.CR" ]
http://arxiv.org/abs/2307.01308v1
20230703192544
A one-dimensional mathematical model for shear-induced droplet formation in co-flowing fluids
[ "Darsh Nathawani", "Matthew Knepley" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
[t]0.7title [t]0.7author date Shear-induced droplet formation is important in many industrial applications, where the primary focus is on droplet sizes and pinch-off frequency. We propose a one-dimensional mathematical model that describes the effect of shear forces on the droplet interface evolution. The aim of this paper is to simulate paraffin wax droplets in a co-flowing fluid using the proposed model to estimate the droplet volume rate for different flow velocities. Thus, the study focuses only on the dripping regime. This one-dimensional model has a single parameter that arises from the force balance on the interface. We use PETSc, an open-source solver toolkit, to implement our model using a mixed finite element discretization. The parameter is defined by cross-validation from previous computational and experimental data. We present the simulation results for liquid paraffin wax under fast-moving airflow with a range of velocities. § INTRODUCTION Droplet formation is a naturally occurring process in free-surface flows. However, it can be seen in many scientific and industrial applications, a few examples being ink-jet printing <cit.>, cell bioprinting <cit.>, spray cooling <cit.>, testing hydrophobic surfaces <cit.>, atomization and droplet entrainment in annular flow <cit.>, and microfluidics devices <cit.>. A proper understanding of droplet formation started to shape in 1833 when Savart experimented with fluid jets <cit.>. Savart concluded from his experiments that the breakup of a jet always happens regardless of inlet fluid velocity or radius, type of fluid (meaning density and viscosity), or direction of gravitational force. He postulated that it must be due to some intrinsic property of the fluid. But he could not identify it as the surface tension. The role of mean curvature and the surface tension force as a source of instability was explained by Laplace <cit.> and Young <cit.> in 1805. Plateau in 1863 found that the long wavelength perturbations on a fluid jet reduce the surface area, and they become unstable if the wavelength is greater than some critical value. However, he incorrectly concluded that all the wavelengths that reduce the surface energy are unstable <cit.>. Rayleigh was the first to demonstrate that droplet formation occurs in finite time due to the force of surface tension acting against inertia <cit.>. Hence, in the calculation of the critical wavelength, the fastest growth rate is selected. For the fluid column of radius h_0, his linear stability analysis of jet breakup predicted the critical wavelength λ_cr = 9.01 h_0, which agreed well with the experiment by Savart. After this initial impetus from research in droplet dynamics, the dynamics of a pendant drop received great attention from both mathematicians and scientists. Eggers and Villermaux collected a wonderful record of the advances in understanding the liquid break-up process <cit.>. As the drop becomes heavier by continuously adding fluid, gravity overcomes the surface tension and the instabilities start to grow on the interface. The surface tension minimizes the surface energy by decreasing the radius of the fluid column. A spherical droplet starts to form at the end of the fluid column hanging from a very thin neck. Eventually, the radius becomes zero, which is the “pinch-off” point, and a droplet separates from the initial fluid column. Immediately after the primary pinch-off, the “unbalanced” surface tension creates an impulse on the interface of the neck. 
The capillary waves due to this recoil perturb the interface before the tip can collapse back to the top of the fluid cone. This leads to a secondary breakup creating smaller droplets, which are called “satellite drops”, a phenomenon also observed in liquid bridges and decaying jets. In 1993, Jens Eggers established a scaling solution of the axisymmetric fluid neck that appears in a droplet breakup process <cit.>. He explained the universality of the singularity region and the self-similarity of the solution. He also derived the solution of the Navier-Stokes equations before and after the singularity, showing that the solution is characterized by a universal scaling exponent, thus proving the self-similarity <cit.>. This universal scaling behavior was also explored in singularities of the PDE, especially in interface flows, to explain that the local behavior of the equations is governed by scales that are independent of the initial and boundary conditions <cit.>. Eggers later explained that linear stability analysis provides a reasonable prediction for droplet sizes, although it fails to explain surface evolution once there is sufficient interface deformation <cit.>. This is bound to happen near the singularity due to dominant nonlinear behavior. Moreover, even the higher-order analysis is not able to explain the shape of the drop near singularity <cit.>. The research in droplet dynamics is mature <cit.> and the mathematics is well-validated by the experiments <cit.>. For instance, the one-dimensional model using the finite difference method by Eggers and Dupont <cit.> for gravity-induced droplets was sufficiently accurate to describe the primary pinch-off process. The recent investigation of the same model using the finite element method with a self-consistent algorithm was validated with the experiments <cit.>. Almost all applications of droplet generation require control of certain quantities of interest for the separated droplet, like volume, frequency of droplet generation, surface area, average moving velocity, etc. The emulsion process <cit.> and atomization process <cit.> come to light when listing the applications of shear-force-induced droplet formation. The emulsion process, where the dispersed phase fluid is injected into the continuous phase fluid to produce monodispersed droplets, has been a very interesting matter from both experiments and modeling perspectives. Umbanhowar et al. <cit.> described an experimental technique to generate a monodisperse emulsion with ≤ 3% polydispersity. Cramer et al. <cit.> explained the effects of co-flowing ambient fluid on the droplet formation mechanism. Further, they explained that the continuous phase flow velocity, fluid viscosity, and surface tension are the primary factors for droplet formation to happen via the dripping or jetting process. Furthermore, Garstecki et al. <cit.> analyzed the droplet breakup in co-flowing immiscible fluids and explained the importance of the geometric confinement on the break-up process. Wilkes et al. <cit.> performed computational simulations of the evolution of a pendent droplet in ambient quiescent air using two-dimensional equations. They used interface tracking with a finite element approach with an evolving mesh. Their simulation results matched with very high accuracy; capturing length evolution and microthread in a viscous drop. They were able to explain the overturning behavior of the viscous drop before the primary pinch-off. Taassob et al. 
<cit.> used the Volume-of-Fluid (VOF) approach with the finite volume method to simulate droplet break-up in co-flowing fluids. They reported that the continuous phase flow rate has a significant impact on the primary droplet sizes. Sauret and Shum used the level-set method with finite elements to investigate jet break-up in microfluidic co-flowing devices. They presented that introducing initial perturbations to the dispersed phase velocity can lead to faster jet break-up and more control over the droplet sizes. Dewandre et al. <cit.> proposed a non-embedded co-flow-focusing nozzle design that can generate droplets with a wide range of radii without needing any surfactant or coating. They also provided the numerical simulations using Arbitrary Lagrangian-Eulerian (ALE) method. They reported that the droplet size can be controlled entirely by the continuous phase flow velocity for a given dispersed phase fluid in a given geometric set-up. The previous work on numerical investigation of shear-induced droplets was done using a two-dimensional domain to account for both fluids. In contrast, the droplet formation in the quiescent background was studied with the one-dimensional approach and the assumption of rotational symmetry. The observation of rotational symmetry can also be true in the co-flowing fluids. However, the attempt to devise a one-dimensional mathematical model to provide the shear force effects of the continuous phase on the droplet formation process is yet to be made. In this paper, we endeavor to develop a solution approach to derive a one-dimensional model that may be used to simulate droplet formation in co-flowing fluids. The purpose of this paper is to propose a mathematical model to include the shear force effect of the continuous phase flow on the interface such that the model equations are simplified. We explore the mathematical understanding of the continuous phase flow near the interface and the contribution to the interface evolution. This mathematical model is derived in section <ref> with the explanation of the numerical procedure to solve the model equations. The model introduces a parameter that is governed by the shear layer thickness, which can be difficult to foresee. Therefore, we perform cross-validation to define this parameter and validate this one-parameter model with the experiments, which is clarified in section <ref>. We also explore droplet evolution for different continuous phase velocities in the dripping regime. § MODELING A one-dimensional approach for droplet formation in a quiescent background was proposed by Jens Eggers and Todd Dupont in <cit.>. Using their approach in a co-flowing fluid configuration, we start with the Navier-Stoked equations in cylindrical coordinates. A generic sketch of the flow geometry is shown in Fig. (<ref>). Rotational symmetry is assumed; therefore, the interface can be defined in the r-z plane. The continuity and momentum equations in cylindrical coordinates are given below, with u the velocity of the droplet fluid, p its pressure, and ν the kinematic viscosity. ∂ u_r/∂ r + ∂ u_z/∂ z + u_r/r = 0 ∂ u_r/∂ t + u_r ∂ u_r/∂ r + u_z ∂ u_r/∂ z + 1/ρ∂ p/∂ r - ν( ∂^2 u_r/∂ r^2 + ∂^2 u_r/∂ z^2 + 1/r∂ u_r/∂ r - u_r/r^2) = 0 ∂ u_z/∂ t + u_r ∂ u_z/∂ r + u_z ∂ u_z/∂ z + 1/ρ∂ p/∂ z - ν( ∂^2 u_z/∂ r^2 + ∂^2 u_z/∂ z^2 + 1/r∂ u_z/∂ r) - g = 0 The radius r is bounded by h(z,t), which is the advecting boundary of the droplet. Since the interface is moving with the flow velocity, it can be modeled using a front-tracking method. 
The model equation is given by Eq. (<ref>). ∂ h/∂ t + u_z ∂ h/∂ z = u_r |_r=h Figure (<ref>) shows a schematic of a flow configuration where two fluids are flowing in the same direction. In this co-flowing configuration, The inner fluid is considered to be the dispersed phase fluid, and a superscript `d' is used to define its variables. The outer fluid is called the continuous phase fluid and the variables are defined using a superscript `c'. In the case of a co-flow type environment, the velocity profile for the continuous phase flow can be derived by assuming a laminar and incompressible flow. In this section, we first derive the equation for the velocity profile. Then, using the asymptotic expansion approach, the governing equations are derived. §.§ Dispersed phase flow We define the field variables of the dispersed phase flow using asymptotic expansion. We use the fact that the radius of the fluid column is much smaller than the length. Moreover, the radial contraction is faster than the elongation. Therefore we expand the field variables, namely axial velocity (u_z^d) and pressure (p^d) using an asymptotic expansion in r. Because of the symmetry, we get u_z^d = u_0^d + u_2^d r^2 + ... p^d = p_0^d + p_2^d r^2 + ... and using u_z^d in continuity equation, we get u_r^d. u_r^d = - ∂ u_0^d/∂ zr/2 - ∂ u_2^d/∂ zr^3/4 - ... We use Eq. (<ref> - <ref>) to simplify Eq. (<ref>) for lowest order in r. ∂ u_0^d/∂ t + u_0^d ∂ u_0^d/∂ z + 1/ρ∂ p_0^d/∂ z - ν^d ( 4u_2^d + ∂^2 u_0^d/∂ z^2) - g = 0 §.§ Continuous phase flow The important aspect of modeling the shearing effects of the continuous phase flow is the description of the velocity profile. We start with the Navier-Stokes equations using cylindrical coordinates given by Eq. (<ref>-<ref>) and apply them in the pipe-flow scenario. The radial velocity is assumed to be zero and the axial velocity is assumed to be a function of radius only. Therefore, the momentum equations reduce to ∂ p^c/∂ r = 0 1/r∂/∂ r( r ∂ u_z^c/∂ r) = 1/μ^c∂ p^c/∂ z which explains that the pressure does not change along the radial direction. Assuming the constant pressure gradient in the axial direction, we can get the velocity profile as given below. u^c_z = r^2/4 μ^cdp/dz + C_1 ln(r) + C_2 The constants C_1 and C_2 can be found using the flow conditions of the continuous phase. At the interface, the velocity of the dispersed and continuous phase flow should be equal. The velocity away from the interface should be the same as a Poiseuille flow velocity profile. The distance away from the interface where this condition is satisfied depends on the thickness of the boundary layer in the continuous phase fluid as shown in Fig. (<ref>). r = h u^c_z = u^d_z r = Ch u^c_z = 1/4 μ^cdp/dz( C^2 h^2 - R^2 ) Here, the parameter C must be greater than 1 and R is the radius of the capillary pipe. The parameter C defines the thickness of the shear layer in the continuous phase flow, which is precisely (C-1)h. Intuitively, the thickness is some x% of the interface distance h from the axis of symmetry, where x is 𝒪(1). Therefore, the parameter C must be greater than 1. The thickness of this shear layer defined how much force is experienced by the dispersed phase droplet. Using the above conditions we can get two equations for our two constants. Using the asymptotic expansion for u^d_z from equation (<ref>), we get h^2/4μ^cdp/dz + C_1 ln(h) + C_2 = u^d_0 C_1 ln(Ch) + C_2 = - R^2/4μ^cdp/dz We subtract equation (<ref>) from equation (<ref>) to get C_1. 
Then we substitute C_1 in equation (<ref>) to get C_2. C_1 = ( h^2 - R^2 )/4 μ^c ln(C)dp/dz - u^d_0/ln(C) C_2 = u^d_0 + u^d_0 ln(h)/ln(C) - h^2/4 μ^cdp/dz - ( h^2 - R^2 ) ln(h)/4 μ^c ln(C)dp/dz Substituting these constants in equation (<ref>) yields the equation for the continuous phase velocity. u^c_z = u^d_0 + u^d_0 ln(h/r)/ln(C) + (r^2 - h^2 )/4μ^cdp/dz - ( h^2 - R^2 )/4 μ^cdp/dzln(h/r)/ln(C) The axial velocity profile given in the equation (<ref>) is the proposed velocity profile in the region close to the interface. The radial velocity of the continuous phase is assumed to be the same as the diffuse phase radial velocity at the interface to satisfy the kinematic condition. §.§ Force balance The force balance can be performed at the interface using the proposed velocity profile. The stresses from the dispersed and continuous phase flow contribute to the interface dynamics in both normal and tangential directions. The total normal forces are balanced by the surface tension force and the total tangential forces are zero. 𝐧̂σ^d 𝐧̂ + 𝐧̂σ^c 𝐧̂ = - γ𝒦 𝐧̂σ^d 𝐭̂ + 𝐧̂σ^c 𝐭̂ = 0 where σ^c and σ^d represent the stress tensor for the continuous phase and diffuse phase respectively. The normal and tangent vectors are defined by 𝐭̂ = 1/√(1 + (∂ h/∂ z)^2)[ ∂ h/∂ z; 1 ] 𝐧̂ = 1/√(1 + (∂ h/∂ z)^2)[ 1; -∂ h/∂ z ] The surface tension coefficient is given by γ and 𝒦 is the curvature, which is defined by 𝒦 = [ 1/h (1 + ∂ h/∂ z^2 )^1/2 - ∂^2 h /∂ z^2/ (1 + ∂ h/∂ z^2 )^3/2] The stresses in the normal and tangential directions are given by 𝐧̂σ𝐧̂ = - p + 2μ/[1+(∂ h/∂ z)^2][ ∂ u_r/∂ r + ∂ u_z/∂ z(∂ h/∂ z)^2 . . - (∂ u_z/∂ r + ∂ u_r/∂ z) ∂ h/∂ z] 𝐧̂σ𝐭̂ = μ^d/[1+(∂ h/∂ z)^2][ 2 ∂ u_r^d/∂ r∂ h/∂ z + (∂ u_z^d/∂ r∂ u_r^d/∂ z) . . + (1 - (∂ h/∂ z)^2) - 2∂ u_z^d/∂ z∂ h/∂ z] We use Eqs. (<ref>) and (<ref>) in the force balance given by Eqs. (<ref>) and (<ref>). The force balance can be simplified using the proposed axial velocity profile given in the equation (<ref>). The radial velocity of the continuous phase is the same as the interface velocity in the vicinity of the interface. Therefore, we can use equation (<ref>) for the radial velocity. The expansion of the force balance provides a set of equations to derive the governing equations. In the case of the tangential force, we get 𝒪(h^-1) and 𝒪(h) terms, which provide us with two equations. u_0 μ^c/h ln(C) + R^2/4 h ln(C)dp^c/dz = 0 4 u_2 = 6/h∂ u_0/∂ z∂ h/∂ z( 1 + μ^c/μ^d) + ∂^2 u_0/∂ z^2( 1 + μ^c/μ^d) - 1/μ^ddp^c/dz - 1/2μ^d ln(C)dp^c/dz Similarly, we collect 𝒪(1) terms from the normal force balance and use the equation (<ref>) for simplification. - p_0 - μ^d ∂ u_0/∂ z = - p^c + μ^c ∂ u_0/∂ z - γ𝒦 + 2 ∂ h/∂ z0( u_0 μ^c/h ln(C) + R^2/4 h ln(C)dp^c/dz) ∴ p_0 = p^c - μ^d ∂ u_0/∂ z( 1 + μ^c/μ^d) + γ𝒦 We can use the Eq. (<ref>) and (<ref>) to substitute corresponding terms in the equation (<ref>). We drop the subscripts and superscripts for dispersed phase velocity, u_0^d, for the simplistic appearance of the equations. Also, we add the buoyancy force as an external force to the system, which yields the following momentum equation. ∂ u/∂ t + u ∂ u/∂ z + γ/ρ∂𝒦/∂ z - 6 ν^d/h∂ u/∂ z∂ h/∂ z( 1 + μ^c/μ^d) - 3 ν^d ∂^2 u/∂ z^2( 1 + 2/3μ^c/μ^d) + 2/ρ^ddp^c/dz + 1/2 ρ^d ln(C)dp^c/dz - ( 1 - ρ^c/ρ^d) g = 0 where, 𝒦 is the curvature defined by equation (<ref>). The interface equation given by Eq. (<ref>) can be simplified using the asymptotic expansion of the radial and axial velocities given by Eqs. (<ref>) and (<ref>). 
This simplification yields the following equation for interface tracking. ∂ h/∂ t + u ∂ h/∂ z + h/2∂ u/∂ z = 0 §.§ Mixed finite element approach The Eqs. (<ref>) and (<ref>) are solved to simulate the droplet formation of an inner fluid in a two-fluid system with a given outer flow velocity by Eq. (<ref>). These are solved for u and h using a finite element discretization. However, the highest order derivative is of the third order, which is problematic for our C^0 continuous element scheme. The approximation for this term will be discontinuous across element interfaces. Inspired by Ambravaneswaran et al. <cit.>, we choose a mixed-element formulation, in which we explicitly discretize the axial derivative of the radius h with a new field variable s. The equation for this new variable is given by s - ∂ h/∂ z = 0 The curvature term in equation (<ref>) is now augmented by the new field variable s such that 𝒦 = [ 1/h (1 + s^2 )^1/2 - ∂ s/∂ z/ (1 + s^2 )^3/2] We derive the weak form using this mixed-form variable in the governing equations. Performing the integration by parts, the mixed finite element formulation is given by ∫_Ω q [ ∂ u/∂ t + u ∂ u/∂ z -6ν/h∂ h/∂ z∂ u/∂ z( 1 + μ^c/μ^d) . + γ/ρ{-s ∂ s/∂ z/h (1 + s^2 )^3/2 - s/h^2 (1 + s^2 )^1/2} . + 2/ρ^ddp^c/dz + 1/2 ρ^d ln(C)dp^c/dz - ( 1 - ρ^c/ρ^d)g ] + ∫_Ω∇ q [3 ν( 1 + 2/3μ^c/μ^d) ∂ u/∂ z + γ/ρ∂ s/∂ z/ (1 + s^2 )^3/2] d Ω - ∫_Γ q [3 ν( 1 + 2/3μ^c/μ^d) ∂ u/∂ z + γ/ρ∂ s/∂ z/ (1 + s^2 )^3/2] d Γ = 0 ∫_Ω v [ ∂ h/∂ t + u ∂ h/∂ z + h/2∂ u/∂ z ]d Ω = 0 ∫_Ω w [ s - ∂ h/∂ z ] d Ω = 0 Here, q, v, and w are test functions, and the mean curvature is defined by equation (<ref>). The discrete system given by Eqs. (<ref> - <ref>) is solved using Galerkin finite elements using a third order polynomial approximation for u and h and second order for s. §.§ Stabilization The Galerkin formulation presents a limitation on the stability of the convection-dominated problems. The droplet formation problem is inherently convective in nature. Therefore, in addition to the Galerkin approximation, there is a need to have stabilization in the formulation. The discrete form of the Galerkin formulation for the convection-dominated problems neglects a truncation error that is dissipating <cit.>. This neglected dissipation term causes the Galerkin approximation to produce a solution with negative dissipation. To compensate for this negative dissipation, the artificial diffusion term is added in the formulation using Streamline Upwinding (SU) scheme. The convective term in the strong form is ( u - 6 ν^d/h∂ h/∂ z)^u_conv ∂ u/∂ z The artificial diffusion term that corresponds to this convective operator is given by βu_conv Δ z /2^ν̅ ∂^2 u/∂ z^2 where, u_conv is convective velocity, Δ z is the element size, and β is the parameter that governs the amplitude of the added diffusion. The value of β is chosen between 0 and 1. β = 1 corresponds to the full upwind differencing scheme and β = 0 corresponds to zero added diffusion. The use of a fully upwinding scheme is discouraged <cit.> due to its excessively dissipative results. Therefore, β = 0.5 is used for the simulations in this study. This stabilization was only enabled for low-viscosity fluids, like paraffin wax, water, etc. High-viscosity fluids like glycerol can be handled without any stabilization. Another approach for a stable formulation is to use Steamline Upwiding Petrov-Galerkin (SUPG) method <cit.>. In SUPG formulation, the strong form of the equations is regularized using a similar parameter as ν̅. 
However, the strong form equation has a third order derivative of h from the curvature term. Using the SUPG stabilization scheme defeats the purpose of using the mixed finite element formulation. §.§ Initial and boundary conditions Initially, the velocity is zero and the curvature profile is a hemisphere that minimizes surface energy. The inlet radius h_0 is fixed depending on the nozzle radius and the inflow velocity u_0 is constant. The radius at the tip of the droplet, at length L(t), is zero. The set of Eqs. (<ref>)-(<ref>) are then solved using a continuous Galerkin formulation subject to the following constraints. Initial conditions: h = √(h_0^2 - z^2) s = -z/√(h_0^2 - z^2) for (0≤ z < L_0), s|_L_0 = - C u = 0 where C is a large negative number. In our implementation, we use -10. However, the code was tested with larger values and the results were unchanged. Boundary conditions: z = 0 h = h_0 u = u_0 z = L(t) h = 0 u = dL/dt The length of the drop L(t) can be calculated as a part of the solution as explained by Ambravaneswaran et al. <cit.> by calculating the volume of the drop, which can then be used to calculate the velocity at the tip. However, this results in a dense row in the Jacobian, so we instead produce L(t) by self-consistent iteration which is explained in our previous work <cit.>. The non-dimensional numbers are used for the analysis of the results. A total of five dimensionless quantities are used for the analysis and are given as follows: * The velocity ratio of continuous phase to dispersed phases, u^c/u^d. * The viscosity ratio of continuous phase to dispersed phases, μ^c/μ^d. * Weber number of the dispersed phase (We^d), which is a measure of the importance of the inertial forces over the surface tension forces. We^d = ρ^d u^d^2 h_0/γ * Capillary number of the continuous phase (Ca^c), which is a ratio of the viscous drag force vs surface tension force. Ca^c = μ^c u^c/γ * Ohnesorge number of the dispersed phase (Oh^d), which relates viscous forces to inertial and surface tension forces. Oh^d = μ^d/√(ρ^d γ h_0) § ANALYSIS AND RESULTS §.§ Validation The most important factor for the shear-induced droplet model is the definition of the parameter C. Hence, the cross-validation is done to define the parameter C using the findings by Hua et al. <cit.>. Then this one-parameter model is validated using the defined parameter C by comparing the droplet profile with the experimental results by Cramer et al. <cit.>. The quantities we use to define C are the velocity ratio and viscosity ratio of the continuous phase to the dispersed phase, and the surface tension by comparing the effects of these quantities on the droplet radius. Intuitively, the shear layer thickness is some x% of the interface distance from the axis of symmetry h, where x is 𝒪(1). Therefore, the parameter C must be greater than 1. The shear force exerted by the continuous phase fluid on the dispersed phase droplet is embedded in the thickness of this shear layer. As reported by Hua et al. <cit.>, the droplet radius is affected by the viscosity and velocity ratios of two fluids and the surface tension force at the common interface for a given initial conditions. Primarily, this suggests that the balance of viscous, inertial, and surface tension forces between two fluids define the coefficient for the thickness of the interface layer, which is (C-1). Since the parameter C is dimensionless, we can define the dependence of this parameter on dimensionless quantities. 
The dimensionless quantities used are the Capillary number of the continuous phase flow Ca^c, the Ohnesorge number of the dispersed phase flow Oh^d, and the viscosity ratio of the continuous and dispersed fluids. The functions to define C are chosen to be monotonic, tanh() and exp(), to have the inclusive lower bound. The function C = 1 + [ 0.45tanh(2.50 Oh^d - 2.0) + 0.45 ] [ 20.0exp( -45 Ca^c (μ^c/μ^d)^-0.6 + 0.045 ) ] The change in non-dimensional droplet radius for various velocity ratios is shown in Fig. (<ref>). The plot shows the comparison of the founding of Hua et al. and the simulation results using the Eq. (<ref>). The simulation results provide a very good agreement with the comparison data. The droplet radius for increasing velocity ratio gets smaller. Similarly, Fig. (<ref>) describes that the non-dimensional droplet radius gets smaller as the viscosity ratio gets larger. This suggests that the droplets get smaller as the shear force from the continuous phase fluid gets larger. Also, the impact of change in surface tension force on the droplet radius is reported in Fig. (<ref>). The Weber number, as given by Eq. (<ref>), is inversely proportional to the surface tension force. Therefore, for increasing Weber number or decreasing surface tension force, the non-dimensional droplet radius decreases. This suggests that higher surface tension force creates bigger droplets due to the higher regularization of the curvature derivative. The empirical relation that defines the parameter C by Eq. (<ref>) can be used to perform simulations for different materials and flow conditions. However, the comparison with the experimental data can be useful to gain confidence in the defined correlation. Therefore, a simulation is performed using the experimental findings by Cramer et al. <cit.>. The comparison is shown in Fig. (<ref>) for 68% the mixture of κ-Carrageenan in water as the dispersed phase fluid and sunflower oil as the continuous phase fluid. The continuous phase velocity is 0.075 m/s. Other fluid properties are used as given in <cit.>. The droplet profile from the simulation is overlapped with the experimental image. The comparison shows that the one-dimensional model can correctly identify the length evolution of the droplet. However, some discrepancy is seen in the curvature profile in the region of high curvature gradient. The simulated curvature near the top and bottom regions of the neck deviates from the actual curvature. This discrepancy might be because the shear force is assumed to be valid across the entire interface with the prescribed viscosity and velocity profile. However, this difference is only seen for the high viscosity ratio of the continuous and dispersed fluids. The primary droplet profile is still predicted well, with a reasonable inaccuracy near pinch-off. §.§ Paraffin Wax droplets In this section, the simulations are performed for a liquid paraffin wax droplet in the co-flow environment where the continuous phase fluid is air. The paraffin wax properties are used from the previous work <cit.>. The simulations are done using varying continuous phase velocities. The inlet radius (h_0) is 1.6 mm and the inlet flow velocity of the dispersed phase (u^c) is 0.01 m/s. The radius of the outer capillary (R) is considered to be 3h_0 to be consistent with the validation results. The simulation cases are run only for shear force and in the absence of gravity. 
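For a given fluid pairing and flow setting, the dimensionless groups and the empirical closure for C given above can be evaluated directly. The short Python sketch below illustrates the calculation; the property values in it are illustrative assumptions and not necessarily the paraffin-wax/air values used for the simulations.

import math

def dimensionless_groups(rho_d, mu_d, mu_c, gamma, u_d, u_c, h0):
    # We^d, Ca^c and Oh^d as defined in the list of dimensionless numbers above
    We_d = rho_d * u_d**2 * h0 / gamma           # inertia vs. surface tension (dispersed phase)
    Ca_c = mu_c * u_c / gamma                    # viscous drag vs. surface tension (continuous phase)
    Oh_d = mu_d / math.sqrt(rho_d * gamma * h0)  # viscous vs. inertio-capillary (dispersed phase)
    return We_d, Ca_c, Oh_d

def shear_layer_parameter(Oh_d, Ca_c, visc_ratio):
    # Empirical closure C(Oh^d, Ca^c, mu^c/mu^d) of the fitted correlation above
    return 1.0 + (0.45 * math.tanh(2.50 * Oh_d - 2.0) + 0.45) \
               * (20.0 * math.exp(-45.0 * Ca_c * visc_ratio**-0.6 + 0.045))

# Assumed, illustrative fluid properties and flow settings (SI units)
rho_d, mu_d, gamma = 760.0, 3.0e-3, 0.025   # dispersed-phase density, viscosity, surface tension
mu_c, u_c = 1.8e-5, 10.0                    # continuous-phase viscosity and co-flow velocity
u_d, h0 = 0.01, 1.6e-3                      # inlet velocity and inlet radius

We_d, Ca_c, Oh_d = dimensionless_groups(rho_d, mu_d, mu_c, gamma, u_d, u_c, h0)
C = shear_layer_parameter(Oh_d, Ca_c, mu_c / mu_d)
print(f"We_d = {We_d:.3e}, Ca_c = {Ca_c:.3e}, Oh_d = {Oh_d:.3e}, C = {C:.4f}")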
The continuous phase velocity range is chosen from 7 m/s to 15 m/s with an increment of 1 m/s, which kept the jet evolution in the dripping regime. The chief quantities of interest for the shear-induced paraffin wax droplet formation are the pinch-off volume of the primary droplet and the pinch-off time. Fig. (<ref>) shows the pinch-off time for increasing continuous phase velocity. The pinch-off time decreases by almost a factor of 10 from u^c = 7 m/s to u^c = 15 m/s. The continuous phase flow contributes to the forces in both normal and tangential directions on the interface. Increasing velocity increases normal and tangential forces that force the fluid column to approach the singularity while pushing it for more elongation. This reduces the overall time for radial expansion and necking, ultimately lowering the pinch-off time. This behavior is visible from the series of images in Figures (<ref> - <ref>). The radial expansion phase is clearly visible from -0.5 non-dimensional time away from pinch-off for continuous phase velocity 7, 8, and 9, which suggests that a lower velocity range for the continuous phase does not provide enough shear force to eliminate this inflation. This is why the pinch-off time is very high for these cases. For velocity 10 m/s and higher, the shearing effect increases and creates a thinner fluid column. This helps the surface tension naturally in shrinking the fluid column radius, and it takes less time to get a pinch-off. Although the pinch-off time approaches a certain lower bound that suggests the minimum time for pinch-off for the dripping regime. The droplet volume and droplet size are other quantities of interest for the analysis. Fig. (<ref>) shows the primary droplet radius for increasing continuous phase velocity. The radius of the droplet decreases for increasing the shearing effect. The thinner fluid column due to the high shearing effect creates a smaller size droplet. The images on the right in Figures (<ref> - <ref>) show the comparison of the primary droplet sizes. The volume of the pinch-off droplets also decreases because of the smaller droplet sizes. Fig. (<ref>) shows the plot for decreasing pinch-off volume for increasing velocity. This is also due to less accumulation of the fluid in the column coming from the shearing effect. Since the higher outer velocity forces a faster pinch-off, the added volume from the inlet is smaller compared to that from a lower continuous phase velocity. The pinch-off volume and pinch-off time are both decreasing with the increasing outer flow radius. However, the volume rate of pinching droplets per unit time increases because the pinch-off time decreases by a higher magnitude than the pinch-off volume. This is shown in Fig. (<ref>). For the lower range of the continuous phase velocity profile, the pinch-off volume rate increases slowly due to a higher pinch-off time. However, the increment in pinch-off volume rate is very fast after u_c = 10 m/s. For the higher droplet velocity range, the increment trend slows down due to very little change in pinch-off time and volume. § CONCLUSION The increasing importance of shear-induced droplet formation in multi-phase flow applications shows the need for efficient, high-fidelity modeling. We present a one-dimensional mathematical model that can be useful to simulate droplet formation in co-flowing fluids with a prescribed velocity for the continuous phase fluid. 
We propose that the continuous phase flow velocity, modeled as a Poiseuille flow, be used in the force balance on the interface, which simplifies the momentum equation. The interface is then advected with the flow. The discretization of the model equations is accomplished by a mixed finite element approach. We use PETSc, an open-source toolkit, to discretize and solve the equations. The proposed model consists of a single parameter that is defined using previous experimental and computational studies. This cross-validated definition of the parameter is then used to simulate liquid paraffin wax droplets in co-flowing airflow. The primary quantities of interest were droplet pinch-off volume and pinch-off time. Our simulation results reported decreasing pinch-off time for increasing continuous phase velocity. This happens due to the increased forces in both normal and tangential directions, which produces a more elongated structure at a faster rate. However, this decreasing trend in pinch-off time approaches a lower bound suggesting a minimum time required for pinch-off, which depends on the viscous, inertial, and surface tension forces. We noticed decreasing droplet radius with increasing shear force from the outer fluid. The high shear force creates a thinner fluid column that produces smaller size droplets. This shows that the primary droplet volume reduction is due to the cubic relation with the radius. However, the pinch-off volume per unit time has an increasing trend for increasing shear force. The findings in this paper for paraffin wax droplets are particularly useful for the prediction of fuel utilization in the hybrid rocket combustion process. The regression of the paraffin-based fuel happens partially due to the atomization process, where droplet formation in a fast-moving oxidizer gas forms the ligaments of liquid paraffin wax leading to droplet formation. We aim to apply this approach to predict the paraffin droplet sizes that can be used to estimate the regression rate due to atomization. § ACKNOWLEDGEMENT Funded by the United States Department of Energy’s (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program III (PSAAP III) at the University at Buffalo, under contract number DE-NA0003961. This work was partially supported by the National Science Foundation under Grant No. NSF SI2-SSI: 1450339.
http://arxiv.org/abs/2307.01477v2
20230704050106
The GMRT High Resolution Southern Sky Survey for pulsars and transients -- VI: Discovery of nulling, localisation and timing of PSR J1244-4708
[ "Shubham Singh", "Jayanta Roy", "Shyam Sunder", "Bhaswati Bhattacharyya", "Sanjay Kudale" ]
astro-ph.HE
[ "astro-ph.HE" ]
National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune 411 007, India National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune 411 007, India National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune 411 007, India National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune 411 007, India National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune 411 007, India Many pulsars in the known population exhibit nulling, which is characterised by a sudden cessation and subsequent restoration of radio emission. In this work, we present the localization, timing, and emission properties of a GHRSS discovered pulsar J1244-4708. Moreover, we find that this pulsar shows nulling with a nulling fraction close to 60%. A quasi-periodicity is also seen in the nulling from this pulsar with two timescales. We demonstrate the broadband nature of nulling in this pulsar using simultaneous observations in band-3 (300-500 MHz) and band-4 (550-750 MHz) with the uGMRT. We also present a comparison of the efficiency of various search approaches such as single pulse search, Fast Folding Algorithm (FFA) based search, and Fast Fourier Transform (FFT) based search to search for nulling pulsars. We demonstrated that the FFA search is advantageous for detecting extreme nulling pulsars, which is also confirmed with multiple epochs of observations for the nulling pulsars using the GMRT. One of the objectives of the pulsar surveys is to find new pulsars with interesting emission properties. Rotating Radio Transients (RRATs) and nulling pulsars are objects that emit a periodic yet erratic signal. The radio emission from these objects frequently ceases and then resumes. This process of stopping and restarting of the radio emission is expected to be connected with the radio emission mechanism. In this paper, we compare the efficiency of various search approaches for nulling pulsars and RRATs (single pulse search, Fast Folding Algorithm (FFA) based search, and Fast Fourier Transform (FFT) based search). We also present the localization, timing, and emission properties of a GHRSS pulsar J1244-4708. This pulsar shows nulling with a nulling fraction close to 60%. This pulsar also shows periodic nulling with two-time scales of periodicity. We demonstrate the broadband nature of nulling in this pulsar using a simultaneous observation in band-3 (300-500 MHz) and band-4 (550-750 MHz) of the uGMRT. § INTRODUCTION Since the discovery of the first pulsar <cit.>, there has been a consistent effort to understand the radio emission physics from these objects. Even after decades of rigorous work on both the theoretical and observational front, there are still many questions regarding radio emission physics that are not completely answered (<cit.>, <cit.>, and <cit.>). The stable profile of pulsars, obtained after folding thousands of single pulses, contains information about the average beam shape and geometry of the pulsar, whereas the single pulse properties like microstructures <cit.>, subpulse drifting <cit.>, mode changing <cit.>, nulling <cit.>, and intermittency <cit.> are the features that are used to understand the exact physics of pulsar radio emission. <cit.> was the first to report the nulling phenomena in pulsars, which is a cessation of normal pulsed emission for a period of time. Till now, over 200[http://www.ncra.tifr.res.in/ sushan/null/null.html] pulsars are known to show nulling. 
<cit.> indicated the lack of dedicated search for nulling in the current population and a careful investigation and search for nulling in known pulsar population are required to truly estimate the number of nulling pulsars. Moreover, a major fraction of the current pulsar population has not been studied for their single pulse properties due to the sensitivity limitations of radio telescopes. There are many pulsar surveys trying to increase the pulsar population by using more sensitive telescopes and new search techniques (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Along with normal and millisecond pulsars, many new exciting time-domain transients have been discovered by these surveys, including ultra-long period pulsars (e.g. <cit.>), Rotating radio transients (RRATs, e.g. <cit.> and <cit.>), and many new Fast Radio Bursts (FRBs, e.g. <cit.> and <cit.>). One of the main objectives of these surveys is to discover more pulsars with interesting emission features that can help us to constrain the radio emission mechanism. Discovering new nulling pulsars and RRATs poses a significant challenge. The population of RRATs has been defined as sources of repeating signals that can be more efficiently detected in single pulses than the periodicity searches, though they have inherent periodicity <cit.>. Some nulling pulsars show very long nulls lasting for a few tens of minutes, which is larger than the integration time per pointing in many major pulsar surveys. There is also a question of a suitable search method for nulling pulsars, especially for pulsars with severe nulling. Sufficient bright pulsars can be found in a single pulse search, but a periodicity search will be needed for a fainter population of pulsars with extreme nulling. Fast Fourier Transform (FFT) based periodicity searches (e.g. , <cit.>) require a signal to be consistent in its periodicity. While the Fast Folding Algorithm (FFA, <cit.>) search method searches for periodic signals by folding the time series at trial periods and can capture any signal that appears in the folded profile. A detailed comparison of these two periodicity search methods is required to select the best-suited methodology to search for pulsars with long nulls. The nulling phenomenon can be characterized by three parameters: nulling fraction, null as well as burst lengths, and periodicity of nulling. The nulling fraction measures the fraction of observation time when there is no signal detected from the pulsar. The null and burst lengths provide a measure of the typical timescales of the nulling and continuous pulsed emission respectively. Sometimes nulling appears to be quasi-periodic (e.g. <cit.> and <cit.>) and an approximate period of the nulling can be measured. These inputs from nulling phenomena are crucial to evaluate the radio emission models that attempt to explain nulling in pulsars. There are two popular classes of models that explain the nulling in pulsars on the basis of the radio emission mechanism. One class is based on the pulsar death Valley <cit.>. Pulsars stop emitting in radio when their magnetic field strength is not enough to sustain pair production. This class of models proposes that nulling pulsars have magnetic fields close to this boundary value and these pulsars only emit when conditions are favorable otherwise they remain in a null state <cit.>. 
Though this model looks interesting and feasible, many nulling pulsars are located far away from the pulsar death line, thus having enough field strength to sustain pair production <cit.>. The second class of models relies on changes in the emission state <cit.> or the loss of coherence in the radio emission process <cit.>. One of the proposed reasons behind this loss of coherency is the change in the surface magnetic field configuration <cit.> due to surface current on the polar cap <cit.>. This change in field configuration can also explain the mode change (a phenomenon where the mean pulse profile abruptly switches between two or more quasi-stable states) associated with the nulling. The quasi-periodic nulling seen in many pulsars and the consistent lengths of nulls and bursts can also be explained by this model <cit.>. We are performing a low-frequency pulsar survey with the GMRT radio interferometer, called GMRT High Resolution Southern Sky (GHRSS[http://www.ncra.tifr.res.in/ bhaswati/GHRSS.html]) survey. The survey, currently operating in phase-II over a frequency range of 300-500 MHz and targeting GMRT sky within -20 degree < declination < -54 degree, has found 28 new pulsars till now (<cit.>, <cit.>, <cit.>, <cit.>, and <cit.>) including two nulling pulsars. In this paper, we present localization, timing, and discovery of nulling for PSR J1244-4708, a pulsar discovered by the GHRSS survey. We discuss the efficiency of single pulse search and periodicity search methods to find nulling pulsars in section <ref>. Localization and timing of the GHRSS pulsar J1244-4708 is presented in section <ref>. We discuss the folded profile of J1244-4708 and its frequency evolution in section <ref>. We present the various nulling properties of this pulsar in section <ref> and summarize this work in section <ref>. § DETECTABILITY OF NULLING PULSARS IN PERIODICITY SEARCH Periodicity search methods are used to detect a periodic signal present over the observing duration consisting of fainter single pulses. The most sensitive periodicity search for non-accelerated signals is the Fast Folding Algorithm (FFA) search <cit.>. In this method, the detection is done on the folded profile. Use of a periodicity search will be only useful if the folded profile S/N (signal-to-noise ratio) is larger than the single pulse S/N. If the peak S/N of a single pulse is S, then the peak S/N of the folded profile after folding N such pulses will be given by, S_fold = S √(N) Now, if we consider a nulling fraction of f_null in the time series (i.e. only (1-f_null)N pulses are having signal), then the S/N of the profile will become, S_fold=(1-f_null)S√(N) The signal will be detected in folded profile if S_fold > S_threshold, where S_threshold is the threshold for detection, S(1-f_null)√(N) > S_threshold The number of pulses N in the observation can be written as τ/P, where τ is observation duration and P is the period of the pulsar, (1-f_null)√(τ/P)>β Here β is the ratio of the detection threshold and the average single pulse S/N. τ>β^2/(1-f_null)^2P This relation tells that all nulling pulsars irrespective of the severity of their nulling can be discovered in periodicity search, given that we have a sufficiently long observation. One can also calculate the maximum nulling fraction of a pulsar for a particular period and time of observation, which can be detected better in a folded profile than in a single pulse search. 
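A minimal numerical check of this bound can be written in a few lines of Python. The choice β = 1 (detection threshold equal to the mean single-pulse S/N) is an assumption made here for illustration; it reproduces the worked example that follows.

import math

def max_detectable_nulling_fraction(t_obs, period, beta=1.0):
    # Largest nulling fraction still detectable in the folded profile:
    # f_max = 1 - beta * sqrt(P / tau)
    return 1.0 - beta * math.sqrt(period / t_obs)

def min_observation_time(f_null, period, beta=1.0):
    # Minimum time-series length for a periodicity-search detection:
    # tau > beta^2 * P / (1 - f_null)^2
    return beta**2 * period / (1.0 - f_null)**2

print(max_detectable_nulling_fraction(600.0, 1.0))   # 10-min pointing, P = 1 s  ->  ~0.96
print(min_observation_time(0.96, 1.0) / 60.0)        # time needed for f_null = 96%  ->  ~10 min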
Considering the typical pointing duration of around 10 minutes for the GHRSS survey and a pulsar period of 1 second, the pulsar will be better detected in folded profile if the nulling fraction is less than 96%. In the phenomena of nulling, the signal appears and disappears from pulse to pulse. Though the signal is periodic, the absence of a pulse weakens the overall signal strength and also affects the performance of search methods that are strictly based on periodicity. The FFT-based search methods are based on the periodicity of the signal and the detectability in the power spectra depends on the consistency/regularity of the periodic signal. In FFA search, the time series is folded at all possible trial periods and detection is essentially done on the folded profiles. The features that appears in the folded profile with underlying periodicity (but having lack of regular emission) can be detected by the FFA search. This fundamental difference between the two periodicity search methods warrants a detailed comparison of their performance on nulling signals. It should be noted that the superiority of FFA search over the FFT search for all non-accelerated signals is already established by <cit.> and <cit.>. In this work we are looking further for any change in the relative performance of these two search methods caused by the nulling of the pulsed emission. To simulate nulling pulsars, we injected pulses in the white noise time series with a length of 10 minutes (similar to the GHRSS observing pointing duration) at a period of 2 s. We also simulated a telescope-like noise time series with rednoise conditions similar to the GHRSS survey <cit.>. We used this noise time series to account for the rednoise condition in the GMRT time-domain data allowing a more realistic comparison. We injected a given pulse shape at a given rotation period in a time series. We used a period of 2 s and a Gaussian pulse shape having a full-width half maxima (FWHM) equivalent to 1% of the rotation period. To simulate the nulling pulsar with a given nulling fraction, we used a random number following a uniform distribution in a range [0.0,1.0]. We generated this number for each pulse cycle and if the value is larger than the nulling fraction, then we injected a pulse in that period cycle otherwise leave that portion of the noise time series unaltered. We also used another random number following a Gaussian distribution to introduce variation in the strengths of injected pulses. We varied the nulling fraction of the signal in the range zero to one. Fig. <ref> shows the relative performance of the FFA and FFT searches as a function of the nulling fraction. Though, in general the relative performance of the FFA search is better for all nulling fractions, a boost in the relative performance of the FFA search is seen in the white noise cases when the nulling fraction is very high (larger than 90%). In the presence of rednoise, we see the boost in relative performance at lower nulling fractions (as low as 50%) as well. We also use the nulling pulsar J1936-30 (now J1937-2937) reported in <cit.> and another nulling pulsar J1244-4708, whose properties are first time getting reported in this work, to compare the two search methods. These data points from real nulling pulsars are loosely following the trend of higher FFA S/N for larger nulling fractions. The ratio of FFA S/N and FFT S/N depends on the rednoise conditions in that particular observation. 
Most of the data points are above the trend in the presence of simulated rednoise, indicating that the rednoise conditions in those observations are more severe than the rednoise parameters we used for the simulation To summarize, FFA search is always better for any non-accelerated signal but an additional advantage is seen for pulsars with extreme nulling. This advantage gets amplified in the presence of rednoise. So, the extreme nulling pulsars give us one more reason to use FFA search to find these isolated systems. § LOCALIZATION AND TIMING OF PSR J1244-4708 The GHRSS survey in phase-II with the upgraded GMRT (uGMRT) discovered PSR J1244-4708 in the FFT search (Bhattacharyya et al. 2023 in prep.). This is a long-period pulsar with a period of 1.4114 s and a Dispersion Measure (DM) of 75.0 pc cm^-3. The GHRSS survey uses the incoherent array (IA) mode of observations, in order to increase the sky coverage per pointing. In phase-II, the IA beam of GMRT has a Half Power Beam Width (HPBW) of 64' at 400 MHz. Hence each in-beam discovery can have an uncertainty of ∼±32' in their location. Accurate localization is needed to follow up a newly discovered pulsar with the narrow and more sensitive phased array (PA) beam of the GMRT. We used the multiple phased array beamformation method described by <cit.> to localize J1244-4708. We imaged the field with the wide-band uGMRT data in which the pulsar was discovered to extract the point sources and their locations. Then, we formed PA beams using the legacy GMRT baseband data having 33 MHz bandwidth at each of these locations and check for pulsed emission in each beam. One of the point sources in this field is likely to be pulsar and the PA beam formed at that location is expected to give √(N_ant) times more S/N than the IA beam detection, where N_ant is the number of antennas used in the beam formation. We detected the pulsar with a very good S/N (∼ 4 times of the IA beam) at one of the point sources in the field of view (FOV), located at a ∼ 20' offset from the GHRSS discovery pointing centre. Fig. <ref> shows this point source with 1.3 mJy flux, detected at 10σ significance. The detection significance of the simultaneous PA and IA beam for this pulsar are shown in Fig. <ref>. The PA beam in this image exhibits much cleaner detection of the pulsar in comparison with the IA beam detection.We get very clear detection of the pulsar with clear nulls at this location, confirming the pulsating nature of this point source. This same field also contains a GHRSS millisecond pulsar J1242-4712 <cit.>. Due to the presence of this millisecond pulsar, this field was routinely observed for follow-up timing enabling us to use these data (i.e. IA beam) for the timing of PSR J1244-4708. We folded the data with and then used of <cit.> to calculate TOAs (time of arrivals) and then we used the pulsar timing software <cit.> to derive the timing model. We used the accurate location (12h44m27.2(5)s, -47d08'00(12)") obtained from the localization in the pulsar ephemeris. We fitted for position, period, and period derivative and were able to obtain phase connected timing solution for this pulsar. The timing model for this pulsar for an observation span of 577 days is given in table <ref>. The post-fit timing residuals are shown in Fig. <ref>. The derived parameters like the estimate of surface magnetic field strength (B_s = 1.2 × 10^12 G), the characteristic age (τ = 22.3 MYr), and the spin-down energy (Ė=1.4× 10^31 erg s^-1) are also listed in the table. 
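The derived quantities quoted above follow from the fitted rotational frequency and its derivative through the standard spin-down relations. The Python sketch below reproduces them, assuming the usual magnetic-dipole expressions and a neutron-star moment of inertia of 10^45 g cm^2.

import math

F0, F1 = 0.7084996307, -5.007e-16      # fitted frequency (Hz) and its derivative (Hz/s)
P = 1.0 / F0                           # spin period (s)
P_dot = -F1 / F0**2                    # period derivative (s/s)

SEC_PER_YEAR = 3.156e7
I_NS = 1.0e45                          # assumed moment of inertia (g cm^2)

tau_c = P / (2.0 * P_dot) / SEC_PER_YEAR / 1.0e6     # characteristic age (Myr)
B_s = 3.2e19 * math.sqrt(P * P_dot)                  # surface dipole field (G)
E_dot = 4.0 * math.pi**2 * I_NS * P_dot / P**3       # spin-down energy loss rate (erg/s)

print(f"P = {P:.4f} s, P_dot = {P_dot:.3e}")
print(f"tau_c ~ {tau_c:.1f} Myr, B_s ~ {B_s:.1e} G, E_dot ~ {E_dot:.1e} erg/s")   # ~22 Myr, ~1.2e12 G, ~1.4e31 erg/s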
After noticing the signature of nulling in the PA beam, we followed up this pulsar with the PA beam using the GMRT wideband receivers in band-3 (300-500 MHz) and band-4 (550-750 MHz) of the uGMRT, with the aim of a detailed study of the emission features of this pulsar. The results from these studies are reported in the subsequent sections.

Table 1: Timing parameters of the pulsar J1244-4708
Pulsar name: J1244-4708
Right ascension (J2000, h:m:s): 12:44:27.236(3)
Declination (J2000, d:m:s): -47:08:01.9(1)
Rotational frequency, F0 (s^-1): 0.7084996307(1)
Frequency derivative, F1 (s^-2): -5.007(7)×10^-16
DM (pc cm^-3): 75.0174(3)
Period epoch (MJD): 58559.79799
DM epoch (MJD): 58522
Timing span (MJD): 58533-59110
Number of TOAs: 43
RMS timing residual (μs): 194.45
Solar system ephemeris model: DE405
Units: TCB
Clock correction procedure: TT(TAI)
Derived parameters:
Characteristic age: 22.3 MYr
Surface magnetic field (B_s): 1.2×10^12 G
Spin-down energy (Ė_rot): 1.4×10^31 erg s^-1

§ PROFILE EVOLUTION OF J1244-4708 We used band-3 (300-500 MHz) and band-4 (550-750 MHz) phased array (PA) observations of this pulsar to study the frequency evolution of its profile. This pulsar shows three clear components in the folded profile and one faint component at the trailing end of the profile. We divided the 200 MHz bandwidth of both the band-3 and band-4 data into three subbands to quantify the frequency evolution of the profile. Panel (a) of Fig. <ref> shows the profile at the central frequencies of the different subbands. While there are three distinct components at lower frequencies, these components start merging with increasing frequency. The merging of these components hints that the widths of the components increase with increasing frequency. Gaussian fitting of the components revealed one more component between the second and third components, so a total of five Gaussian components can be fitted to the profile. We fitted five Gaussian components to the profiles obtained from the full bandwidths of the band-3 and band-4 observations, in order to get good S/N in the folded profile. We find that the locations of these components, and hence the separations between them, remain unchanged (within the binning accuracy) across the frequency range. We measured the W10 (width at 10% of the peak height) of the fitted Gaussian shapes associated with the three prominent components in the profile for both bands. These widths either increase with increasing frequency or remain unchanged within the error bars. We plot the evolution of the W10 of the full profile along with the frequency evolution of the widths of the three prominent components in panel (b) of Fig. <ref>. The radius-to-frequency mapping relation states that higher frequencies are generated at lower heights, and hence the profile and its components should have smaller widths at higher frequencies <cit.>. The evolution of the width of both the full profile and the individual components of PSR J1244-4708 is opposite to what is expected from the radius-to-frequency relation, though the frequency coverage of our observations (300-750 MHz) is not enough to robustly conclude this. In fact, a significant fraction of the pulsar population deviates from the trend of smaller profile widths at higher frequency <cit.>. § NULLING PROPERTIES OF J1244-4708 We use the dedispersed time series for the nulling analysis. First, we remove the baseline variations in the time series by subtracting a running median window matched to the width of the profile.
Then, we compute the on-pulse and off-pulse detection significance using an equal number of bins from the on and off-pulse phase for each pulse. We plot the histograms of on-pulse and off-pulse detection significance and use the method described by <cit.> to calculate the nulling fraction of the pulsar. We fit a Gaussian shape in the off-pulse histogram to estimate its amplitude (A_0) and standard deviation (σ). We use this standard deviation to fit the negative part of the on-pulse energy histogram and estimate its amplitude (A_1) (see panel (a) of Fig. <ref> and <ref>). The nulling fraction is defined as A_1/A_0. The error in the nulling fraction is also calculated by using equation (3) of <cit.>. We also stack the pulses with signal and pulses with nulls separately to check if there is any faint emission in the null region. We use a threshold value of 3σ to classify between pulses with signal and pulses with nulls. We stack these classes separately and get a high significance profile of the pulsar in the cases of pulses with signal and a noise-like response in the case of nulls (see panel b of Fig. <ref> and <ref>). We also notice repeating patterns of nulling in the pulse sequences. To search and quantify any periodicity in the nulling, we prepare a sequence of ones and zeros as described by <cit.>. We put ones for pulses with signal and zeros for pulses with null in a sequence. Now, we subtract the mean of this sequence from itself to make it zero mean. Now, we take a block of 512 bins of the sequence and calculate the power spectra of this block by using the module from of python. Then we shift the block by 50 bins on the sequence and take the next block of 512 numbers and calculate its power spectra. We repeat this exercise until we reach the end of the sequence. We stack the power spectra obtained at each step to get the final periodogram. We used a band-3 uGMRT observation epoch with a duration of 1.6 hours to characterize the nulling in the 300 to 500 MHz frequency range. This observation has more than 4000 single pulses from this pulsar. Fig. <ref> shows the pulse sequence (with an averaging of 8 pulses) with clear nulls from this observation epoch along with stacked profiles of pulses with signal and nulls. The stacked profile of nulls confirms that there is no low-level emission from the pulsar in the null phase. Fig. <ref> shows the histogram of the on and off pulse significance along with the periodogram of the pulse sequence. The histogram of the off-pulse energy distribution shows long tails owing to the residual fast baseline variations. One can see clear bimodal distribution in the on-pulse energy distribution. The calculated nulling fraction is 57.1± 1.7 %. The final periodogram of the pulse sequence shows quasi-periodicity in the nulling. We see two time scales of quasi-periodicity in the periodogram, the first one peaks at around 250 pulses while the second time scale is around 64 pulses. The periodicity of nulling varies with time within the observation duration <cit.> and the width of the peaked structures in the final periodogram represents the variation in the periodicity. Though the two peaks in the final periodogram are not well separated with not enough frequency bin resolution, one can infer a rough estimate for the variations of the two timescales of periodicities considering the widths of peaked structures and the dip between the two peaks. The first periodicity timescale peaks at 250 periods and varies roughly between 130 to 500. 
The second periodicity timescale peaks at 70 periods and can vary between 40-100 periods. We used a uGMRT band-4 observation to investigate the nulling in the frequency range 550-750 MHz. This observation is 1.3 hours long and contains more than 3300 pulses from the pulsar. Panel (a) of Fig. <ref> shows the pulse sequence (with an averaging of 6 pulses) with clear nulls and panel (b) shows the profiles of pulses with signal and nulls from the pulsar and verifies that there is no faint emission in the null regions. The histogram of the on and off-pulse detection significance is shown in panel (a) of Fig. <ref>. The nulling fraction calculated from this histogram is 55.1± 0.8 %. Panel (b) of the Fig. <ref> shows the final periodogram of the pulse sequence with two time scales for the quasi-periodic nulling, similar to the quasi-periodicities we find in band-3 (as shown in Fig. <ref>). This time, the first timescale peaks at the first bin of the periodogram, which corresponds to 512 pulses, and dips at the second bin which is 256 pulses. The second timescale however is consistent with the periodicities seen in band-3 analysis. It peaks at 64 periods and ranges between 50-130 periods. We used seven short observations (each of ∼40 minutes) in band-3 (300-500 MHz) of this pulsar to robustly estimate the nulling fraction. These observations contain a total of 11883 pulses from this pulsar. Fig. <ref> shows the histogram of on and off-pulse energy of all these pulses. The nulling fraction estimated from this histogram is 63± 3.4%. The significant error in the estimate is due to a large error in fitting a Gaussian shape to the off-pulse energy distribution, as the distribution significantly deviates from the Gaussian shape due to the baseline variations in the time series of different epochs. Due to the short observation duration of individual epochs, we were not able to determine the quasi-periodicity of nulling in these observations. We have estimated the nulling fractions and nulling periodicities in the two frequency bands, and they are slightly different. This may hint that nulling phenomenon is different in different frequencies, but a simultaneous observation in two frequency bands is required to truly examine the broadband nature of nulling. To demonstrate the broadband nature of the nulling in this pulsar, we use a simultaneous dual-frequency observation of this pulsar. The GMRT array was split into two subarrays. The subarray-1 having 10 antennas including 8 central square antennas and two arm antennas, was configured in band-3 (300-500 MHz), while the subarray-2 with 13 antennas, including four central square and nine arm antennas, was configured in band-4 (550-750 MHz). We observed the pulsar for 1.1 hours in this dual subarray mode. We got more than 2800 pulses from the pulsar simultaneously in both band-3 and band-4 of uGMRT. Now, we check for the simultaneous appearance of these pulses with signal and nulls in both the bands. Fig. <ref> shows the pulse sequences (with an averaging of 3 pulses) from the two bands. The panel (a) shows pulses from band-3 while panel (b) represents the pulses from band-4. These two pulse sequences are very similar. The prominent bursts and nulls have roughly the same arrival time in both frequency bands. We generate the sequence of ones and zeros for these two pulse sequences, similar to that used in the periodicity analysis of nulling. We take the cross-correlation of these two sequences using the module of of python. Fig. 
<ref> shows the cross-correlation function of these two sequences corresponding to two frequency bands. We see a sharp peak at zero lag, which is expected when the two pulse sequences are very similar. The zoomed version of the cross-correlation function also shows the width of the peak, which is close to ∼ 30 pulses. This should correspond to the most common timescale of the burst in these observations. With this exercise, we successfully demonstrate the broadband nature of nulling in this pulsar. To summarize, the pulsar J1244-4708 shows nulling with a nulling fraction of ∼ 55% in both band-3 (300-500 MHz) and band-4 (550-750 MHz) of uGMRT. There are two timescales of quasi-periodicity of nulling in this pulsar, the first corresponding to a few hundred periods, while the second one corresponds to 40-130 periods and peaks at 60-70 periods. We also demonstrate the broadband nature of nulling using a simultaneous dual-frequency observation of this pulsar. We performed analysis for subpulse drifting using Longitude Resolved Fluctuating Spectra (LRFS)<cit.> and did not find any other periodicity except the periodicity of nulling. In our observations, we also did not find a second mode of emission. § SUMMARY Nulling pulsar emits irregular and sporadic signals with underlying periodicity. In some extreme cases like longer nulling pulsars and RRATs, The single pulse search seems to be more effective than the periodicity search. In this work, we show that any periodic signal can be better detected in a periodicity search than the single pulse search, irrespective of its nulling fraction, given that the length of the time series is sufficiently large. For example, GHRSS pointing of 10 mins duration can effectively find nulling pulsars with the nulling fraction equal to or less than 96% from periodicity search, if the period of the pulsar is 1 second or smaller. This ensures that periodicity searches are useful in the search of periodic signals with very extreme nulling. A comparison of the FFA and FFT-based search methods over a range of nulling fractions shows that the FFA has an additional advantage over the FFT search for extremely nulling signals (having nulling fraction ∼80%). We also verify this trend by using data points from two nulling pulsars from the GHRSS survey (J1937-2937 and J1244-4708). So, the FFA search can be efficiently used to discover extreme nulling pulsars like the ones found in the GHRSS survey. We report the localization and timing of a GHRSS pulsar J1244-4708 along with the frequency evolution of its profile and nulling properties. We find that the pulsar shows hints of inverse width-to-frequency relation, i.e. widths of the profile and its components are increasing with increasing frequency. The nulling fraction of this pulsar is close to 60%. The pulsar also shows quasi-periodicity in nulling with two timescales corresponding to ∼ 70 periods and a few hundred periods respectively. <cit.> describe periodic nulling as an extreme form of amplitude modulation. They show that the pulsars with periodic nulling and amplitude modulation occupy a region on the modulation periodicity versus the spin-down energy (Ė) plane that is separated from the region occupied by the pulsars showing subpulse drifting (see Fig. 3 of <cit.>). The measured values of nulling periodicities (70 periods and a few hundred periods) and the spin-down energy (∼ 10^31 erg s^-1) of the pulsar J1244-4708 puts it in the region occupied by pulsars exhibiting periodic nulling and amplitude modulation. 
We also demonstrate the broadband nature of nulling in this pulsar using a simultaneous dual-frequency observation: the cross-correlation of the pulse sequences recorded in the two bands (band-3: 300-500 MHz and band-4: 550-750 MHz) shows that the two sequences are very similar. The fact that this pulsar shows quasi-periodic nulling, and that the nulls occur simultaneously in both bands, favours a cessation of coherent radio emission as the origin of the nulling. This can arise either from the cessation of pair production at the polar cap <cit.> or from a loss of coherence due to a temporarily unfavourable surface magnetic field configuration <cit.>. Given the period derivative of this pulsar, a location in the death valley seems unlikely, so the loss of coherence in the radio emission is the more favoured mechanism for its nulling. § ACKNOWLEDGEMENT We acknowledge the support of the Department of Atomic Energy, Government of India, under project no. 12-R&D-TFR-5.02-0700. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research, India. We acknowledge the support of the GMRT telescope operators and the GMRT staff for supporting the GHRSS survey observations.
http://arxiv.org/abs/2307.00969v1
20230703124011
High Altitude Platform Stations: the New Network Energy Efficiency Enabler in the 6G Era
[ "Tailai Song", "David Lopez", "Michela Meo", "Nicola Piovesan", "Daniela Renga" ]
cs.NI
[ "cs.NI", "cs.SY", "eess.SY" ]
Over-The-Air Federated Learning: Status Quo, Open Challenges, and Future Directions Lina Bariah, Hikmet Sari, and Mérouane Debbah =================================================================================== The rapidly evolving communication landscape, with the advent of 6G technology, brings new challenges to the design and operation of wireless networks. One of the key concerns is the energy efficiency of the Radio Access Network (RAN), as the exponential growth in wireless traffic demands increasingly higher energy consumption. In this paper, we assess the potential of integrating a High Altitude Platform Station (HAPS) to improve the energy efficiency of a RAN, and quantify the potential energy conservation through meticulously designed simulations. We propose a quantitative framework based on real traffic patterns to estimate the energy consumption of the HAPS integrated RAN and compare it with the conventional terrestrial RAN. Our simulation results elucidate that HAPS can significantly reduce energy consumption by up to almost 30% by exploiting the unique advantages of HAPS, such as its self-sustainability, high altitude, and wide coverage. We further analyze the impact of different system parameters on performance, and provide insights for the design and optimization of future 6G networks. Our work sheds light on the potential of HAPS integrated RAN to mitigate the energy challenges in the 6G era, and contributes to the sustainable development of wireless communications. High Altitude Platform Station, energy efficiency, simulation § INTRODUCTION Radio access networks (RANs) are currently experiencing unprecedented growth due to the shift towards fifth generation (5G) technologies, which provide high data rates, reduced latency, and increased network capacity. However, this new communication era brings new challenges. In particular, network densification becomes crucial, especially in urban environments where physical, legal, and bureaucratic limitations constrain the installation of new network infrastructure. On top of that, concerns about the environmental impact continue to grow. The massive deployment of infrastructure, with its associated energy consumption, is anticipated to contribute significantly to global greenhouse gas emissions, thereby accelerating climate change <cit.>. Consequently, there is a growing interest in developing sustainable and energy-efficient communication technologies for the next generation of wireless networks. The sixth generation (6G) of wireless networks is expected to revolutionize wireless communication, by enabling the realization of futuristic applications, such as holographic telepresence, autonomous vehicles, and ubiquitous sensing <cit.>. However, to enable these applications, 6G networks will require even higher data rates, lower latency, and ultra-reliable and secure communications, which will necessitate the deployment of more base stations (BSs) and network elements, leading to increased energy consumption <cit.>. Therefore, it is critical to investigate potential energy-saving opportunities that can be harnessed in the 6G era. To this end, the integration of aerial BSs into terrestrial networks could play an essential role by providing supplementary capacity and offloading portions of mobile traffic to alleviate the burdens of on-ground BSs. 
Specifically, high altitude platform stations (HAPSs) are promising candidates for self-sustainable network nodes, as they can be equipped with super macro BS (SMBS) <cit.> capable of providing additional coverage and capacity to the terrestrial network, and operate in the stratosphere at an altitude of around 20 km without demanding additional energy consumption from the power grid <cit.>. Exploiting HAPS-mounted SMBS enables a space-as-a-service paradigm, to support flexible energy and resource allocation, capacity enhancement, edge computing, and data caching, as well as processing across manifold application domains <cit.>. Conventionally, HAPSs were regarded mainly as an enhancement for communication service provisioning and were limited to rural areas or catastrophic scenarios, where the terrestrial network may not be accessible. Meanwhile, only a few studies have investigated the use of HAPS for non-coverage-based applications, such as gigabit mobile communications <cit.>, IoT services <cit.>, the cooperation with UAVs <cit.>, and hybrid communication together with satellites <cit.>. Our work aims to fill a gap by identifying the potential benefits of HAPS-integrated RAN, which supports joint energy and resource allocation strategies, in network energy efficiency in the 6G era. The objective of this paper is to assess the energy efficiency of a HAPS-integrated RAN, by proposing a framework to quantitatively estimate the energy conservation in different scenarios based on real traffic collected in the city of Milan and adapted to match the traffic statistics reported by abundant 4G/5G BSs deployed in China. In particular, our work presents a comprehensive evaluation of the energy-saving potential of a HAPS-integrated RAN through simulations and numerical analysis. By leveraging the advantages of HAPS technology, we estimate significant energy savings of up to almost 30%. In general, the novelty of our contribution is twofold: i) the construction of the system model, in which we formalize the capability of the HAPSs to offload part of the terrestrial traffic and that of the network to subsequently deactivate the corresponding unloaded BSs to save energy, and ii) the parametric analysis, in which we inspect the impact of different scenario configurations on the resulting energy conservation. § PROBLEM STATEMENT We postulate that the HAPS is deployed and allocated over an urban area of 30 km^2 with 960 BSs distributed throughout the region, as shown in Figure <ref>. The hourly traffic volumes of each BS during a typical week are considered. Additionally, the HAPS-mounted SMBS is equipped with a 4× 4 multiple-input multiple-output (MIMO) radio remote unit (RRU), which can power up to 6 carriers of 20 MHz each. These carriers may be intra-band contiguous. The entire capacity offered by the SMBS is exploited to provide coverage over the entire urban area. With the objective of reducing the overall energy consumption of terrestrial networks, a subset of the terrestrial BSs can be placed in (low-consuming) sleep mode, provided that all their traffic can be offloaded to the HAPS. 
To assess the effectiveness of HAPS offloading, we estimate the network energy consumption, satisfying two basic constraints: i) the number of active terrestrial BSs cannot be less than a fraction l_B of all terrestrial BSs to preserve quality of service (e.g., assuming l_B=40%, we can deactivate up to ⌈ 960 · (1-40%) ⌉ = 576 terrestrial BSs, and thus, a minimum of ⌈ 960 · 40%⌉ = 384 BSs are always active), and ii) the total offloaded traffic cannot exceed the capacity of the HAPS. Denoting by C_HAPS the overall HAPS capacity, and by r_i,h the rate of terrestrial BS i during time step h, the problem can be formulated as follows: Min E_total=∑_h=1^T∑_i=1^N f_i(r_i,h)× x_i,h s.t. ∑_i=1^N x_i,h≥⌈ N · l_B ⌉, ∀ h=1,…,T, ∑_i=1^N r_i,h· (1-x_i,h) ≤ C_HAPS, ∀ h=1,…,T, x_i,h∈0,1, ∀ i=1,…,N, ∀ h=1,…,T, where T is the number of time steps during the observation period, N is the number of terrestrial BSs, E_total is the total energy consumed by terrestrial BSs, f_i(·) is the function that maps the rate of BS i at time step h to its energy consumption, and x_i,h is a binary decision variable that takes value 1 if BS i is active at time step h, and 0 if BS i is deactivated with its traffic offloaded to the HAPS, and thus, (1-x_i,h) indicates an offloaded BS. We aim to minimize E_total by selecting the most energy-consuming group of terrestrial BSs (i.e., the most appropriate series of x_i,h), offloading the corresponding traffic to the HAPS, and putting them into sleep mode. § METHODOLOGY In this section, we model two crucial terms in eq. (<ref>), namely the function that calculates the energy consumption of terrestrial BSs and the capacity of the HAPS, and we propose a simple yet effective traffic offloading algorithm. Subsequently, we implement the algorithm in a systematically devised simulation process to estimate the energy saving. §.§ System Modelling We first describe the traffic profiles that are adopted in our study. Then, we detail how the energy consumption of the terrestrial BSs and the HAPS capacity are modeled. Traffic profiles Our simulation is based on real mobile traffic data collected by an Italian telecom operator in the city of Milan in 2015. 1419 traffic traces, corresponding to as many BSs, are available, each reporting the values of traffic volume on an hourly basis for a period of two months. To perform our study in a more realistic scenario, hence taking into account the growth of traffic demand observed in the past years, the available traffic traces are scaled up, matching various aggregated metrics derived from more recent traffic traces that were collected from 960 4G/5G BSs in an urban area in China in 2020. For each of the 960 BSs, the peak and the 5^th percentile values of the hourly traffic volume, computed over a period of one month, are provided, along with the BS bandwidth capacity and the maximum cell load. To obtain updated traffic profiles for each BS in the considered scenario, the original traffic profiles from the Italian mobile operator are scaled up according to the following procedure. First, the N=1419 original traffic traces are averaged across time to construct a typical week, thus obtaining N average weekly traces. Second, for each of the M=960 recent traffic profiles collected from just as many BSs, we iterate the following operations. 
A set of N new traces are derived, scaling up the original N Italian traces so that each newly derived trace features the same peak and 5^th percentile traffic values as the considered m^th BS, as well as its maximum cell load. Furthermore, among the N scaled traffic traces, we select the one whose mean value is the closest to the average hourly traffic volume of the considered m^th trace. At the end of the entire procedure, we obtain M new scaled weekly traffic profiles. The shape of these newly derived traces remains similar to a subset of the original traffic traces from the Italian mobile operator, whereas their peak, 5^th percentile, and mean values are scaled to match the corresponding metrics derived from the most recent traffic profiles. This makes the traffic profiles considered for our investigation up to date, and thus more realistic. Energy consumption model We consider the energy consumption model for a 5G BS presented in <cit.>. This model is utilized as the function f_i(·) in eq. (<ref>) to calculate the energy consumption, denoted as E_BS: E_BS = E_0 + E_BB + E_Tran + E_PA + E_out, where E_0 is the baseline energy consumption in sleep mode, E_BB is the baseband processing energy consumption, E_Tran is the total energy consumed by the RF chains, E_PA is the power amplifier (PA) static energy consumption, and E_out is the energy needed for data transmission. The first four parameters are known, while the last one is proportional to the transmitted traffic volume: E_out=1/η· P_tx·Δ T ·R_BS/C_BS, in which η is the efficiency of the PA, P_tx is the maximum transmit power, Δ T is the duration of the time step, and R_BS and C_BS are the rate and capacity of the corresponding BS, respectively. Note that R_BS is equivalent to r_i,h in eq. (<ref>). To drive this energy consumption model in a realistic manner, in this paper, we use the normalized values provided in <cit.>. HAPS capacity model The HAPS capacity available to offload traffic from terrestrial BSs can be computed based on the Shannon-Hartley theorem: C = B log_2(1 + γ), where B is the bandwidth and γ is the Signal-to-Noise Ratio (SNR). The SNR can be expressed as: γ = P_rx/N_p, where P_rx denotes the received power and N_p is the noise power. Note the HAPSs and the terrestrial BSs operate in a different frequency band, and thus do not interfere. The received power P_rx between the HAPS and the user equipment (UE) is calculated as follows <cit.>: P_rx = P_tx + G_tx + G_rx - PL, where P_tx is the HAPS transmit power, G_tx and G_rx are the transmitter and receiver antenna gains, respectively, and PL represents the total path loss. Moreover, and for simplicity, let us assume that the UE benefits from the maximum transmitter antenna gain that the antenna array can provide, and thus G_tx is defined as follows <cit.>: G_tx = G_element + 10 log(n· m), where G_element is antenna gain of a single element, and n and m correspond to the number of rows and columns of antenna elements in the antenna array, respectively. Additionally, the total path loss PL depends on various components, according to the following formula <cit.>: PL = PL_b + PL_e, where PL_b is the basic path loss and PL_e is the building entry loss. 
Specifically, the basic path loss is modeled as: PL_b = FSPL(d,f_c) + SF + CL(α,f_c), where FSPL(d,f_c) represents the free space path loss for a separation distance d in km and frequency f_c in GHz, which is given by: FSPL(d,f_c) = 92.45 + 20log(f_c) + 20log(d), SF is the shadow fading loss, a random variable characterized by a normal distribution, i.e., SF∼ N(0, σ_SF^2), and CL(α,f_c) is the clutter loss, with α denoting the elevation angle. Both σ_SF^2 and CL are variables that depend on elevation angles, line-of-sight/non-line-of-sight (LOS/NLOS) conditions[ The LOS probability is also a function of the elevation angle in different urban scenarios (Table 6.6.1-1 in <cit.>).], and frequency (f_c, S-band or Ka-band)[Table 6.6.2-1 in <cit.>.]. The building entry loss PL_e varies depending on the building type, the location within the building and movement in the building. The distribution of PL_e is given by a combination of two lognormal distributions; following Equation (6.6-5) in <cit.>, given a probability value P, we can compute the maximum loss not exceeded with probability P, which we denote by L_BEL(P). The model estimates the building entry loss based on two distinct categories of buildings: thermally efficient and traditional, with the former typically yielding a higher entry loss than the latter. §.§ Offloading Algorithm To solve the minimization problem in eq. (<ref>), and since the energy consumption is a function of the cell load, we devise an algorithm that attempts to minimize the hourly energy consumption of the terrestrial network through HAPS offloading by prioritizing the offloading from the terrestrial BSs handling the least amount of traffic volume, as detailed in Algorithm <ref>. In this way, we can save the non-load dependent energy consumption of more terrestrial BSs. Note that E_total denotes the overall energy consumption of the terrestrial network during the entire observation period, whereas, within each time step h, n_offload indicates the current number of offloaded (hence deactivated) terrestrial BSs, R_offload refers to the overall rate offloaded from the terrestrial BSs to the HAPS, and E_BS,h is the overall energy consumption of all terrestrial BSs in such time step h. §.§ Simulation Strategy At this stage, we have obtained the necessary information to derive optimized energy savings. However, to ensure that our simulation accurately reflects reality, we need to consider three parameters: i) Elevation angle: In our case, the HAPS operates at a height of 20 km with a coverage area of 30 km^2. Therefore, the elevation angle can be assumed constant within the covered area. However, some deviation from the optimal placement directly above the coverage area may be inevitable in reality. Hence, we consider an elevation angle that may range between 60° and 90° to account for this uncertainty; ii) Building type: As discussed in Section <ref>, different building types contribute to varying levels of building entry loss. To generalize our investigation for various scenarios, we consider a variable range for the portion of traditional buildings in the urban area under study; iii) Indoor UE: UEs outside buildings do not suffer from building entry loss, and the corresponding calculation of SNR should exclude this factor, resulting in a higher value and a larger HAPS rate. Hence, we also set a range for the percentage of indoor UEs to account for this variation. 
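To make the capacity model and the offloading rule above concrete, the following Python sketch combines the link-budget and Shannon-capacity equations with the greedy per-hour rule of Algorithm <ref>. It is only an illustration under simplifying assumptions, not the simulator used in this work: the function names and all numerical values (carrier frequency, gains, bandwidth, noise power, per-BS rates) are placeholders rather than the parameters of Table <ref>, and the shadow-fading, clutter and building-entry losses are lumped into a single extra term.

```python
import numpy as np

def fspl_db(d_km, f_ghz):
    # Free-space path loss: FSPL(d, f_c) = 92.45 + 20*log10(f_c) + 20*log10(d)
    return 92.45 + 20 * np.log10(f_ghz) + 20 * np.log10(d_km)

def haps_capacity_bps(bw_hz, p_tx_dbm, g_tx_dbi, g_rx_dbi, pl_db, noise_dbm):
    # Link budget in dB, then Shannon capacity C = B * log2(1 + SNR)
    p_rx_dbm = p_tx_dbm + g_tx_dbi + g_rx_dbi - pl_db
    snr = 10 ** ((p_rx_dbm - noise_dbm) / 10)
    return bw_hz * np.log2(1 + snr)

def offload_one_hour(rates_bps, l_b, c_haps_bps):
    """Greedy rule: deactivate the least-loaded BSs first, keeping at least
    ceil(N * l_B) BSs active and never exceeding the HAPS capacity."""
    n = len(rates_bps)
    active = np.ones(n, dtype=bool)              # x_{i,h} = 1 for every BS
    max_sleep = n - int(np.ceil(n * l_b))
    offloaded = 0.0
    for i in np.argsort(rates_bps):              # least traffic first
        if max_sleep == 0 or offloaded + rates_bps[i] > c_haps_bps:
            break                                # a constraint would be violated
        active[i] = False                        # put BS i to sleep
        offloaded += rates_bps[i]
        max_sleep -= 1
    return active, offloaded

# Illustrative placeholder values (not the settings of Table 1)
pl = fspl_db(d_km=20.0, f_ghz=2.0) + 10.0        # FSPL plus lumped extra losses
c_haps = haps_capacity_bps(20e6, 43.0, 30.0, 0.0, pl, -100.0)
rates = np.random.default_rng(0).uniform(0.0, 30e6, size=960)
active, offloaded = offload_one_hour(rates, l_b=0.4, c_haps_bps=c_haps)
print(active.sum(), "BSs remain active;", offloaded / 1e6, "Mbit/s offloaded")
```

Sorting the base stations by increasing traffic implements the priority rule described above; since candidates are visited in ascending order of rate, the loop can stop as soon as either the minimum number of active BSs or the HAPS capacity constraint would be violated.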
Technically, a larger elevation angle, a higher proportion of traditional buildings, and fewer indoor UEs lead to a larger available HAPS rate, hence higher energy conservation. However, in order to assess the impact of these parameters, a systematic approach is needed to profile the power conservation patterns. Therefore, we propose a Monte Carlo simulation process that involves parametric analyses with a certain degree of randomness. The randomness includes: i) the probability of LOS, ii) the probability that a UE is indoors, iii) the probability that the UE is in a traditional building when the UE is indoors, and iv) the probability that the building entry loss will not exceed the corresponding value. To ensure statistical significance, we perform 1,000 simulation runs, each featuring a randomly generated value of elevation angle, probability of LOS (that depends on the elevation angle), probability of a UE being indoor, and for indoor UEs, probability of being in a traditional building. Furthermore, each simulation deploys 3000 UEs per km^2 <cit.>, each characterized by the respective random realization of the various features (LOS/NLOS, indoor/outdoor, traditional/thermally efficient buildings, probability of building entry loss not exceeded), such that the overall distribution of each feature respects the parameter settings of the current trial. The parameters adopted in the simulation are listed in Table <ref>. § EXPERIMENTAL RESULT In this section, we evaluate the contribution of HAPS offloading to energy savings, study the impact on energy conservation of various configuration parameters, and provide some insight about HAPS utilization. §.§ Energy Saving We define the energy saving as the percentage reduction in energy consumption, i.e., the ratio between the energy consumed by all terrestrial BSs under the offloading strategy and the energy consumed by all BSs when no traffic is offloaded. As reported in Figure <ref>, from the 1,000 performed simulation trials, we derive the corresponding values of energy saving, computed over the entire week (green curve), over the weekend (black dotted curve), over the weekdays (red dashed curve), and during the night (0-5 AM, blue curve). In general, energy savings experience a linear improvement as the trial configuration settings change, regardless of the time period. Specifically, significant savings of up to 29% are achieved throughout the week, and even more substantial savings of up to around 41% are obtained during the night, due to lower traffic demand and the resulting more aggressive offloading. Even in the worst-case scenario, the HAPS provides energy savings of around 17% during the week and around 32% at night. Interestingly, the energy conservation pattern for weekdays and weekends coincides with the overall performance, resulting in overlapping curves. Moreover, despite the fact that savings grow linearly in most cases, variations can be observed at the edges, especially when it comes to weekly savings. This indicates that certain extremely restricted conditions (e.g. low elevation angle, huge prevalence of energy efficient buildings, and extremely high portion of indoor users) should be prevented to avoid dramatic performance drops. Additionally, it should be noted that from 72 to 159 BSs are never activated during the week depending on the configuration setting. 
This opens the door for deeper sleep modes, which could be devised to allow even a lower consumption for these BSs, or a different planning of the terrestrial network might be envisioned to avoid the deployment of unused infrastructure. Another important observation is portrayed in Figure <ref>, which presents the percentage of offloaded traffic per hour, i.e., the ratio between the volume of traffic offloaded to the HAPS and the total traffic demand. Specifically, it demonstrates that only a handful of traffic is handled by the HAPS, especially during the day (3% to 15%), meaning that most of the traffic is still transmitted by terrestrial BSs. Furthermore, despite a limited fraction of traffic, ranging from 6.64% to 16.34%, being offloaded during the whole week on average, the energy savings can amount up to 29%. This is due to the important role played by the non-load dependent energy consumption. Interestingly, the amount of traffic handled by the HAPS experiences an abrupt drop during the night. This is due to the fact that the threshold for the maximum number of sleeping BSs is quickly reached by extremely under-loaded BSs, whereas the capacity of the HAPS is far from saturated, given the low total traffic demand. Overall, by deactivating a reasonable number of BSs and offloading a small portion of traffic using a limited HAPS capacity, we achieve a substantial energy saving, which further emphasizes the significance of integrating HAPS into a RAN. §.§ Impact of Parameters As mentioned in Section <ref>, we perform a parametric analysis to evaluate the influence of the elevation angle, the percentage of traditional buildings, and the portion of indoor UEs. The results are presented in Figure <ref>. As expected, all parameters show a linear effect on the performance of energy conservation: With a larger elevation angle, fewer indoor users, and more traditional buildings, we can achieve higher energy saving. In particular: i) The elevation angle has the most distinct impact on energy savings, resulting in a stratified pattern in the figure. This is due to the fact that the difference in terms of savings between the smallest and largest angles, for a given percentage of indoor UEs and traditional buildings, is approximately 6 percentage points; ii) The proportion of indoor UEs shows a less relevant effect on energy savings. When the percentage of indoor UEs is around 60%, we can achieve additional savings of approximately 3.6% with respect to the 90% of indoor UEs; iii) The percentage of traditional buildings is the least important factor, resulting in a localized effect with respect to elevation angle. This leads to a difference of only around 2.5%. Interestingly, as the fraction of indoor UEs increases at a given elevation angle, the distribution of energy savings becomes more dispersed with respect to the percentage of traditional buildings, indicating an increasingly noteworthy effect when more UEs are inside buildings. In summary, all parameters show monotonic effects with different degrees of impact. To conserve energy, the deployment of HAPSs could be advantageous when located right above areas with more people outside and a prevalence of traditional buildings, like in a tourist city with historical buildings. §.§ Quality of Service When studying power conservation enabled by the HAPS, it is important to consider, not only the amount of energy saved, but also the QoS. 
However, measuring QoS at the application level is not feasible in the current scenario, where we only have access to hourly traffic volume data. As a result, we need to evaluate QoS from a holistic point of view. From Figure <ref>, it can be observed that terrestrial BSs, which are theoretically more mature and reliable, still dominate the communication; since only a limited fraction of terrestrial traffic is offloaded to the HAPS in all configurations, we can reasonably conclude that the QoS with HAPS offloading is equivalent to that achieved with the terrestrial RAN only. Additionally, the overall utilization of the capacity can serve as an effective indicator. Figure <ref> illustrates the utilization of available capacity, which is the ratio of the total managed rate to the sum of the capacities of the HAPS and the active terrestrial BSs at each hour during the week in each trial. Even during busy hours, only between 30% and 50% of the capacity is exploited, and utilization is even lower during the night. This corresponds to a large theoretical available bandwidth per unit of traffic, which provides a wide margin for additional capacity in case of higher-than-expected traffic demand, thus guaranteeing a better QoS. § CONCLUSION In this paper, we propose a simulation-based framework to assess the energy efficiency of the HAPS integrated RAN in the 6G era. Our simulation allows us to profile the energy conservation pattern and quantify the impact of parameters, providing valuable insights into the potential energy savings of different network configurations. The results demonstrate that the proposed solution can enable significant energy savings, with up to 29% reduction compared to traditional terrestrial networks, and indicate that the integration of HAPS introduces an additional degree of freedom to reduce consumption, leading to sustainable development in the future. Overall, our study highlights the potential of HAPS integrated RAN as a promising solution for energy-efficient wireless networks in the 6G era. We hope that our findings will stimulate further research and development in this area, and contribute to the realization of more sustainable and environmentally friendly wireless networks in the future. Future work includes the development of a more robust and efficient offloading strategy that can handle complex scenarios with additional constraints.
http://arxiv.org/abs/2307.02388v1
20230705155523
Multi-Task Learning with Summary Statistics
[ "Parker Knight", "Rui Duan" ]
stat.ME
[ "stat.ME", "stat.ML" ]
Multi-task learning has emerged as a powerful machine learning paradigm for integrating data from multiple sources, leveraging similarities between tasks to improve overall model performance. However, the application of multi-task learning to real-world settings is hindered by data-sharing constraints, especially in healthcare settings. To address this challenge, we propose a flexible multi-task learning framework utilizing summary statistics from various sources. Additionally, we present an adaptive parameter selection approach based on a variant of Lepski's method, allowing for data-driven tuning parameter selection when only summary statistics are available. Our systematic non-asymptotic analysis characterizes the performance of the proposed methods under various regimes of the sample complexity and overlap. We demonstrate our theoretical findings and the performance of the method through extensive simulations. This work offers a more flexible tool for training related models across various domains, with practical implications in genetic risk prediction and many other fields. § INTRODUCTION The growing availability of extensive and intricate datasets presents an opportunity to integrate data from multiple sources. Multi-task learning has emerged as a promising machine learning approach that enables the simultaneous learning of multiple related models, leveraging shared structure between tasks to enhance the performance on each task individually <cit.>. In healthcare and biomedical research, the practical application of multi-task learning is often hindered by data-sharing constraints, which stem from concerns about the ownership and privacy of individual-level data <cit.>. Patient data in these domains is typically sensitive and less likely to be publicly available or shared across study sites, limiting researchers' access to individual-level data from different domains. To overcome this limitation, researchers have increasingly integrated summary statistics into analysis pipelines as a substitute for individual-level data <cit.>. Summary statistics are straightforward, interpretable measures derived from raw data that can offer insights into data distribution, variability, and relationships among variables. Furthermore, they can be aggregated across studies to facilitate data integration and reused in various research projects. Recently, the use of summary statistics has garnered interest in healthcare and biomedical research. For example, many genetic risk prediction methods rely on summary-level statistics such as associations from Genome-wide Association Studies (GWAS), Linkage Disequilibrium estimations (LD), and minor allele frequencies (MAFs) <cit.>. These summary statistics can help predict an individual's likelihood of developing specific diseases based on their genetic profile. Inspired by a potential use case in genetic risk prediction, we propose a multi-task learning framework that enables simultaneous learning of multiple genetic risk prediction models using only publicly available summary statistics. 
Our proposed framework can be used in the context of predicting genetic risks for multiple traits leveraging potentially shared genetic pathways, and can also be used to develop trans-ethnic genetic risk prediction models that account for potential heterogeneity across populations, improving generalizability and real-world applicability. Beyond genetic risk prediction, the ability to learn from summary statistics offers a versatile tool for developing models across a wide range of domains, including healthcare, finance, and marketing. To summarize, the contributions of this work are threefold: First, we propose a flexible multi-task learning framework which allows training multiple models simultaneously using basic summary statistics characterizing marginal relationship between outcomes and features, which are often publicly available. We allow summary statistics corresponding to each task to be generated from distinct or potentially overlapping samples. Secondly, we conducted a systematic non-asymptotic analysis which characterizes how the performance of the proposed methods are influenced by the characteristics of summary statistics. In particular, we show that there are multiple regimes of performance depending on the sample complexity of the source datasets and their overlap. The theoretical results are supported with extensive simulations. Lastly, We propose an adaptive scheme for tuning parameter selection based on the variant of Lepski's method <cit.> given in <cit.>. This allows us to select a data-driven tuning parameter when only summary statistics are available and cross-validation is not feasible. We prove that tuning parameters chosen by this method satisfy an oracle inequality with high probability, and demonstrate the effectiveness of the method via simulations. §.§ Related work The use of summary statistics for regression modeling has been considered in the statistical genetics literature <cit.>. The method for polygenic risk prediction was introduced by <cit.>, which considered fitting a L_1 penalized linear regression with summary statistics, and its theoretical properties were studied in depth by <cit.>. In <cit.>, the authors extend these ideas to polygenic risk prediction with binary traits. The summary statistics used in these methods include the marginal associations between genetic variants and phenotypes, and statistics summarizing the covariance structures among all genetic variants oftentimes derived from a reference genotype dataset. Empirical studies have demonstrated that the efficacy of such models is significantly influenced by the choices of the GWAS summary statistics and the reference dataset <cit.>. However, there's still limited theoretical understanding regarding how the overlap of samples and the inherent heterogeneity between datasets impact the model performance. Moreover, most current approaches devise models for a single trait within a single ancestral population. Considering shared genetic architectures could potentially enhance performance by employing a multi-task learning strategy <cit.>. The authors of <cit.> take this approach, and describe a multi-task estimator for multi-ancestry pQTL analysis. However, they do not consider the setting when only summary statistics are available for each task. Another recent line of work studies the multi-task learning problem under data-sharing constraints. In <cit.>, the authors describe a federated multi-task learning linear regression model for privacy-preserving data analysis. 
Similarly, <cit.> presents a computational framework for multi-task learning under DataSHIELD <cit.> constraints. The formulation of these methods is conceptually similar to ours, but they do not provide theoretical guarantees for their estimators, and we consider a more flexible setting where the summary statistics can be derived from different sources. Finally, our methods are closely related to the one-shot federated learning paradigm, in which only one round of communication is permitted between the primary local research site and additional sites. <cit.> presents an algorithm for fitting logistic regression models using summary statistics from different research sites. The works of <cit.> extend these ideas to linear mixed effects models and generalized mixed effects models, respectively. <cit.> presents a federated algorithm for fitting the Cox proportional hazards model, and <cit.> studies federated transfer learning methods for fitting generalized linear models. Nevertheless, the summary statistics addressed in our research are frequently reported in existing studies and can be employed across various models. This is in contrast to the one-shot federated algorithms, where the summary statistics are model specific and the implementation relies on the infrastructure of a collaborative environment. § PROBLEM SETUP AND METHODS Consider the setting where we are interested in learning a total of Q tasks simultaneously. For each q∈[Q], we posit the linear model Y^(q) = X^(q)β^(q) + ε^(q), where Y^(q)∈ℝ^n_q, X^(q)∈ℝ^n_q × p, and ε^(q) is mean-zero random noise. Each index q corresponds to the q-th task. The dataset D^(q) = (X^(q), Y^(q)) contains the individual-level observations of the outcome and features respectively for the q-th task. We consider the generic setting where the features might be collected from either overlapping or non-overlapping samples across tasks. Our estimand of interest is the matrix B^* = [β^(1), ..., β^(Q)] ∈ℝ^p × Q, where the q-th column of B^* is β^(q). Furthermore, let e_i denote the i-th standard basis vector, so that β^(q) = B^* e_q. If all the individual-level observations are available, a natural estimator of B^* is the regularized multi-task least-squares estimator B̂ = argmin_B {∑_q ∈ [Q] 1/(2n_q) ‖Y^(q) - X^(q) B e_q‖_2^2 + λ R(B)}, where R is a suitable penalty, chosen to enforce similarity structure between tasks, with tuning parameter λ > 0. However, in many applications, we are less likely to observe D^(q). Rather, summary statistics which contain information on the feature-outcome and feature-feature relationships are more likely to be made publicly available. Motivated by the use case in genetic risk prediction, we assume that only the summary statistics b̂^(q) and Σ̂^(q) are observable, where b̂^(q) = 1/n_q (X^(q))^T Y^(q) is derived from {X^(q), Y^(q)}, which we term the discovery data, and Σ̂^(q) = 1/ñ_q (X̃^(q))^T X̃^(q) is a sample covariance matrix computed from the proxy data X̃^(q)∈ℝ^ñ_q × p, which may or may not have overlap with X^(q). Our goal is to estimate B^* using the two sets of summary statistics b̂^(q) and Σ̂^(q). We note that X̃^(q) is not necessarily equal to X^(q). In practice, the studies which report Σ̂^(q) may not be the same as the ones reporting b̂^(q). Intuitively, we hope that X̃^(q) is generated from a similar population as X^(q), but this may not hold in general. In Section <ref>, our theoretical analysis reveals how the overlap between X^(q) and X̃^(q) and their distributional shift can influence the accuracy of multi-task learning. 
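To illustrate the quantities just introduced, the short NumPy sketch below builds, for a single task, the marginal statistics b̂^(q) = (1/n_q)(X^(q))^T Y^(q) from the discovery data and the proxy covariance Σ̂^(q) = (1/ñ_q)(X̃^(q))^T X̃^(q) from a proxy sample that reuses a fraction of the discovery rows. The dimensions, overlap level and signal are arbitrary placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_proxy, p, overlap = 200, 150, 50, 0.3       # placeholder sizes and overlap

beta = np.zeros(p)
beta[:5] = 1.0                                   # sparse ground-truth coefficients
X = rng.standard_normal((n, p))                  # discovery design X^(q)
Y = X @ beta + rng.standard_normal(n)            # linear model Y^(q) = X^(q) beta + eps

# Proxy rows: a fraction of the proxy sample is reused from the discovery data,
# the remaining rows are drawn independently from the same population.
n_shared = int(overlap * n_proxy)
X_proxy = np.vstack([X[:n_shared], rng.standard_normal((n_proxy - n_shared, p))])

b_hat = X.T @ Y / n                              # marginal feature-outcome statistics
Sigma_hat = X_proxy.T @ X_proxy / n_proxy        # proxy covariance estimate
```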
To construct an estimator that uses only the information provided by these summary statistics, we notice that the least-squares loss can be written as ℒ(β) = ‖Y - Xβ‖_2^2 = Y^T Y - 2⟨β, X^T Y⟩ + β^T X^T X β. By dropping the constant term, we arrive at a loss function that can be computed using only summary-level information, namely the matrices X^T Y and X^T X. This motivates our general strategy for constructing an estimator only using summary statistics: we substitute b̂^(q) and Σ̂^(q) where appropriate in each least-squares loss function in Equation <ref> and arrive at the following optimization problem. B̂ = argmin_B {∑_q ∈ [Q] 1/2 ‖(Σ̂^(q))^1/2 B e_q‖_2^2 - ⟨b̂^(q), B e_q ⟩ + λ R(B)} There are many possible choices of R for enforcing structural similarities across tasks. For instance, the recent works of <cit.> and <cit.> study low-rank and angle-based penalties for enforcing a shared orientation among the task-specific parameters. In this work, we study two estimators obtained under the ℓ_2,1 norm penalty, denoted ‖.‖_2,1, and the nuclear norm penalty, denoted ‖.‖_*. These penalties are chosen for their intuitive interpretation: the ℓ_2,1 penalty is more likely to be effective if a common set of variables is active across the tasks. If the task-specific parameters tend to be 'correlated', in the sense that they lie in a low-dimensional subspace, the nuclear norm penalty is preferred. In practice, certain domain knowledge can be incorporated to determine the penalty structure, or it can be chosen in a data-driven manner when a validation dataset is available. The corresponding estimators are expressed as follows: B̂^(sp) = argmin_B {∑_q ∈ [Q] 1/2 ‖(Σ̂^(q))^1/2 B e_q‖_2^2 - ⟨b̂^(q), B e_q ⟩ + λ ‖B‖_2,1} B̂^(lr) = argmin_B {∑_q ∈ [Q] 1/2 ‖(Σ̂^(q))^1/2 B e_q‖_2^2 - ⟨b̂^(q), B e_q ⟩ + λ ‖B‖_*} The superscripts (sp) and (lr) stand for "sparse" and "low-rank" respectively. § THEORETICAL GUARANTEES Before presenting our theoretical results, we need to introduce a bit of notation. Let N denote the total size of discovery observations and proxy observations across all Q tasks. Formally, N = ∑_q =1^Q(n_q + ñ_q). We note that N may double-count individuals who are part of both the proxy data and the discovery data. Define the subset I_q ⊂ [N] as the index set for the discovery data points in the q-th task; in other words, i ∈ I_q implies X_i ∈ℝ^p is a row of X^(q). We define Ĩ_q analogously for the proxy data; i ∈Ĩ_q implies X_i is a row of X̃^(q). Let ρ̃_q = |I_q ∩Ĩ_q| / ñ_q denote the proportion of proxy samples which are also in the discovery dataset for the q-th task. In the results that follow, let γ_q = 1 + ‖β^(q)‖_2^2(n_q/ñ_q + 1 - 2ρ̃_q) and take γ = max_q γ_q. Additionally, let D ∈ℝ^p × Q be the matrix with its q-th column equal to (Σ_1 - Σ_2)β^(q), where Σ_1 and Σ_2 are the population-level covariance matrices of X^(q) and X̃^(q) respectively. The quantities γ and D play important roles in our results that follow. In particular, γ is a multiplicative factor in our bounds that represents the cost of using proxy data rather than individual-level data. Similarly, D will represent the cost of using a proxy dataset with a distributional shift from the discovery data. Finally, we will let n_min and ñ_min denote the smallest sample size of discovery and proxy data, respectively. All proofs are given in the supplement. §.§ Guarantees for the ℓ_2,1-norm estimator In this section, we formally state our assumptions and results for the estimator. The assumptions are standard for high-dimensional regularized estimators; see <cit.> for a deeper discussion of these conditions. 
[Sub-gaussian design and noise] The following holds for each q∈ [Q]: The rows of are independent and identically distributed according to a sub-Gaussian distribution with covariance matrix _1 ∈^p × p. Similarly, the rows of are independent and identically distributed according to a sub-Gaussian distribution with covariance _2 ∈^p × p. The matrices _1 and _2 have bounded eigenvalues. The entries of are independent and identically distributed according to a sub-Gaussian distribution with parameter σ^2. The and are independent of one another. [Shared support] There exists a subset S^* ⊂ [p] such that 𝗌𝗎𝗉𝗉(β^(q)) = S^* for each q. For any S ⊂ [p], let _α(S) = Δ∈^p × Q : Δ_S^c_2,1≤αΔ_S_2,1 [Restricted strong convexity] There exists a constant κ > 0 and a sequence a_N → 0 as N →∞ such that the following inequality holds for each Δ∈_3(S^*) with probability at least 1 - a_N: ∑_q = 1^Q()^1/2Δ e_q_2^2 ≥1/κΔ_F^2 Under assumptions <ref>, <ref>, and <ref>, there exist constants c_1 and c_2 depending only on the σ^2 and the eigenvalues of _1 and _2 such that if n_min∧ñ_min≥ c_1^*_∞, ∞(Q + log p) and λ = O(√(γ(Q + log p)/n_min) + _2,∞), the following inequality holds with probability at least 1 - e^-log p - a_N: - ^*_F ≤ c_2(√(γ s (Q + log p)/n_min) + √(s)_2,∞) In the subsequent discussion, we take q^* = _q∈ [Q]γ_q and (n,ñ, ρ̃) = (n_q^*, ñ_q^*, ρ̃_q^*) so that the triplet (n, ñ, ρ̃) corresponds to the same sizes and overlap factor used to compute γ. There are three main quantities in this upper bound that are of novel interest: the ratio of discovery data size to proxy data size n/ñ, the proportion of overlap between the discovery and proxy data ρ̃, and the error in specifying the proxy data distribution = (Σ^(1) - Σ^(2))β. The first two of these are captured by the factor γ. Our results show that with fixed n and ñ, the larger proportion of overlap leads to better estimation accuracy. When the proxy data and discovery data are precisely the same, meaning that _q = _q for all q, we recover the minimax rate of estimation for the ℓ_2,1 penalized multi-task learning problem established by Theorem 6.1 of <cit.>. If the proxy data and the discovery data are disjoint, meaning that _q ∩_q = ∅ for all q, the error is increased relative to the minimax rate by a factor of (1 + n/ñ)β_2^2. This recovers the result of Theorem 2.1 in <cit.> up to a constant factor, assuming that = 0. The novelty of Theorem <ref> is that we are able to characterize the convergence rate of for any values of n/ñ, p̃, and . Additionally, we emphasize that the form of the γ term implies that a price is paid anytime when is not fully contained in . Indeed, if ρ̃ < 1/2 and we take ñ→∞ we still have that γ > 1 as long as the signal is nonzero. Counter-intuitively, this indicates that an oracle model which has full access to the true covariance matrix of the covariates will perform worse in terms of estimation error than an estimator which has access to individual-level data. Furthermore, if ρ̃ > 1/2, our theorem predicts that the estimator will out-perform the oracle estimator that uses the population covariance matrix. These phenomena are validated in our simulation studies in Section <ref>. §.§ Guarantees for the nuclear norm estimator Now we state our results for the low-rank proxy data estimator, when the penalty is taken to be the nuclear norm. Once again, these assumptions are standard for high-dimensional regression problems with the nuclear norm <cit.>. [Low rank] The matrix ^* has rank r << p ∧ Q. 
Let ^* and ^* denote the column space and row space of ^* respectively. Note that ^* and ^* each have dimension r. Let denote a dimension k ≤ p ∧ Q subspace of ^p, and let denote a dimension k ≤ p ∧ Q subspace of ^Q. Define = (, ) := Δ∈^p × Q : 𝗋𝗈𝗐(Δ) = , 𝖼𝗈𝗅(Δ) = ^⊥ = ^⊥(, ) = Δ∈^p × Q: 𝗋𝗈𝗐(Δ) ⊥, 𝖼𝗈𝗅(Δ) ⊥ Furthermore, for any subspace Ω of ^p × Q, let Δ_Ω denote the projection of Δ onto Ω. We will denote ^* = (^*, ^*). For any set as defined above, let _α() = Δ∈^p × Q : Δ_^⊥_* ≤αΔ__* [Restricted strong convexity] There exists a constant κ > 0 and a sequence b_N → 0 as N →∞ such that the following inequality holds for each Δ∈_3(^*) with probability at least 1 - b_N: ∑_q = 1^Q()^1/2Δ e_q_2^2 ≥1/κΔ_F^2 Under assumptions <ref>, <ref>, and <ref>, there exist constants c_1 and c_2 depending only on σ^2 and the eigenvalues of _1 and _2 such that if n_min∧ñ_min≥ c_1^*_∞, ∞(Q + p) and λ = O(√(γ(Q + p)/n_min) + _), the following inequality holds with probability at least 1 - e^-p - b_N: - ^*_F ≤ c_2(√(rγ(Q + p)/n_min) + √(r)_) This theorem recovers precisely the same behavior with respect to γ and as Theorem <ref>. As γ→ 1, we achieve the minimax rate of estimation for low-rank regression as derived in <cit.> as long as = 0. § TUNING PARAMETER SELECTION WITH LEPSKI'S METHOD A key challenge of applying penalized regression models to summary statistics is that model tuning based on data splitting (e.g., training and validation) is no longer an option. Model selection methods based on information criteria require knowing the log squared loss log - β_2^2, which cannot be recovered from and <cit.>. To address this, we propose to use a tuning scheme based on Lepski's method <cit.>, a classical tool of nonparametric statistics for adaptive estimation with unknown tuning parameters. The authors of <cit.> apply the ideas of Lepski to the LASSO, providing a fast algorithm for model tuning with non-asymptotic guarantees. In this section, we extend the methods in <cit.> to tune the multi-task estimators described in the present work. The results and ideas in this section apply to both and , so without loss of generality, let (_λ, ) denote a generic estimator-regularizer pair with tuning paramter λ, which may refer to either (_λ, ._2,1) or (_λ, ._*). Additionally, let ^* denote the dual of , meaning that ^*(X) = sup_Y:(Y) ≤ 1X,Y Finally, we let denote the loss function for both estimators and let ∇ denote its gradient. The intuition behind the adaptive tuning procedure is that the tuning parameter should be chosen large enough to control fluctuations in the gradient of the loss function, but not too large such that too much bias is incurred. <cit.> articulates that the performance of regression estimator with a convex penalty is contingent on the following event occurring with high probability: (λ) = ^*(∇ (^*))≤λ/2 where we use our problem's notation for continuity. The proofs of Theorem <ref> and <ref> involve showing that (λ) holds with high probability under our stated conditions. It is straightforward to prove the following proposition, which states that conditional on , the score at our generic estimator is close to the score at the true parameter ^*. Let (_λ, ) denote a generic estimator-regularizer pair. Conditional on the event (λ), there exists a constant C > 0 such that the following inequality is satisfied almost surely: ^*(∇(_λ) - ∇(^*)) ≤ C λ This proposition motivates the following definition, which we adopt from <cit.>. 
Let Λ = λ_1, λ_2, ..., λ_M denote a grid of potential tuning parameters ordered such that 0 < λ_1 < λ_2 < ... < λ_M < ∞. Fix δ∈ (0,1). The oracle tuning parameter λ^*_δ is defined as λ^*_δ = _λ∈Λ(λ)≥ 1 - δ The oracle tuning parameter provides the tightest bound in Proposition <ref>, but is unknowable in practice, since we do not observe ^* and hence cannot verify . The aim of our Lepski-type method is to mimic the performance of λ^*_δ in an entirely data-driven fashion. Letting Λ denote our ordered grid of potential tuning parameters, we follow <cit.> and choose the tuning parameter λ̂∈Λ that satisfies λ̂ = _λ∈Λmax_λ', λ”∈Λ, λ ' , λ”≥λ^*(∇ (_λ') - ∇ (_λ”)) ≤C̅(λ' + λ”) where C̅ is a constant chosen by the statistician. The following theorem states that λ̂ recovers the behavior of λ^*_δ with high probability, as long as C̅ is sufficiently large. Let C denote the constant in Proposition <ref>. If λ̂ is chosen as in Equation <ref> with C̅≥ C, then the following inequalities hold simultaneously with probability at least 1 - δ: * λ̂≤λ_δ^* * ^*(∇(_λ̂) - ∇(^*)) ≤ C^*λ_δ^* where C^* ≥C̅. This theorem is essentially a generalization of Theorem 3 in <cit.>, adapted to our setting. The primary advantage of this Lepski-style tuning scheme is that it can be performed using only the gradient of the loss function, which in our setting consists only of summary-level statistics. This is a marked improvement over other summary statistic-based estimators, which typically require an additional set of individual-level data for tuning. The adaptive tuning scheme does hold some disadvantages. First of all, it requires a choice of constant C̅, which should be taken to be as close to the constant in Proposition <ref> as possible. Remark 10 in <cit.> offers some guidance as to how to choose C̅ but unfortunately their analysis corresponds only to the LASSO. Deriving the exact constant in Proposition <ref> may be possible under stronger assumptions on the data-generating process (i.e. Gaussianity), and we view this as an area of future work. Furthermore, Theorem <ref> offers only a bound on ∇() - ∇ (^*). Translating this to a bound on - ^* will require strong element-wise conditions on each of the matrices , which we do not explore in the present work. Nevertheless, our simulations in Section <ref> indicate that the adaptive tuning method performs well in terms of the MSE of , suggesting that adaptive tuning is a good option for model selection when only summary statistics are available. § NUMERICAL EXPERIMENTS We validate our theory and demonstrate the effectiveness of multi-task learning in proxy data settings via extensive experiments. In each experiment, we take the proxy dataset to be well-specified; in other words, we assume that = 0. When is nonzero, this predictably leads to worse performance, which we demonstrate in the supplement. The code, further implementation details, and additional simulations are also available in the supplement. First, we consider the effect of varying proxy data size on empirical MSE per task. We generate synthetic Gaussian data with n_min = 100, p = 100, ñ_min = τ n_min for τ∈0.5, 1, 2, 5, 10, and ρ̃_q = 0 for each q. The number of tasks was fixed at 8. Furthermore, we generate a row-sparse ^* matrix with 10 nonzero rows and a rank 2 ^* for the sparse and low-rank multi-task estimators, respectively. 
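A minimal sketch of this synthetic setup is given below for the row-sparse case (Gaussian designs, 8 tasks sharing the same 10 nonzero rows, and disjoint proxy samples). The solver is one possible implementation of the ℓ_2,1-penalized summary-statistics estimator by proximal gradient descent; it is our own illustrative code rather than an implementation released with this work, and the regularization level, seeds and signal amplitudes are placeholders.

```python
import numpy as np

def fit_sparse_mtl(Sigmas, bs, lam, n_iter=500):
    """Proximal gradient descent for the summary-statistics objective
       min_B sum_q [0.5*(B e_q)' Sigma_q (B e_q) - <b_q, B e_q>] + lam*||B||_{2,1}."""
    Q, p = len(bs), bs[0].shape[0]
    B = np.zeros((p, Q))
    step = 1.0 / max(np.linalg.eigvalsh(S)[-1] for S in Sigmas)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = np.column_stack([Sigmas[q] @ B[:, q] - bs[q] for q in range(Q)])
        Z = B - step * grad
        row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
        # Row-wise group soft-thresholding: prox of step*lam*||.||_{2,1}
        B = Z * np.clip(1.0 - step * lam / np.maximum(row_norms, 1e-12), 0.0, None)
    return B

rng = np.random.default_rng(1)
Q, p, n, s, tau = 8, 100, 100, 10, 2          # tasks, dimension, n_min, sparsity, proxy ratio
B_true = np.zeros((p, Q))
B_true[:s, :] = rng.standard_normal((s, Q))   # common support across the 8 tasks

Sigmas, bs = [], []
for q in range(Q):
    X = rng.standard_normal((n, p))                      # discovery data for task q
    Y = X @ B_true[:, q] + rng.standard_normal(n)
    X_proxy = rng.standard_normal((tau * n, p))          # disjoint proxy data (no overlap)
    bs.append(X.T @ Y / n)                               # marginal statistics
    Sigmas.append(X_proxy.T @ X_proxy / (tau * n))       # proxy covariances

B_hat = fit_sparse_mtl(Sigmas, bs, lam=0.3)              # lam is a placeholder value
print(np.mean((B_hat - B_true) ** 2, axis=0))            # empirical MSE per task
```

The nuclear-norm estimator can be handled with the same loop by replacing the row-wise soft-thresholding with singular-value thresholding of the iterate.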
We then fit the proxy data multi-task learning estimator and compare the MSE per task to the estimator that has access to all of the individual level data and to the estimator that uses the true covariance matrix Σ. The results of this simulation are given in Figure <ref>. We observe a performance gap between the estimators that use the true covariance matrix and the individual level estimators, as predicted by our theory in Section <ref>. The performance of the proxy data estimators increase with increasing proxy sample size, but are unable to match the performance of the individual level estimator, as expected. Next we study the effect of varying the proportion of overlapping samples between the discovery and proxy datasets. Simiarly, we generate synthetic data with n = ñ = 100, and vary ρ̃, which indicates the proportion of proxy data points that are also in the discovery dataset. With Q=8, we generate ^* in the same way as in the previous simulation. These results are given in Figure <ref>. Once again, we observe the expected performance gap between the estimators with the true covariance and the individual-level estimators. As the proportion of overlap between the proxy dataset and the discovery dataset grows, we see that the performance of the proxy data estimator converges to that of the individual level estimator. This is anticipated by Theorems <ref> and <ref>. Finally, we compare our adaptive tuning procedure to a hold-out validation procedure that uses a small amount of individual-level data each task. The hold-out validation scheme assumes that we have access to a dataset (X_tune^(q), Y_tune^(q)) for each task q ∈ [Q], and chooses λ∈Λ such that it minimizes ∑_q = 1^QY^(q)_tune - X^(q)_tune_λe_q^2_2, where _λ is computed using the proxy data set which is independent from (X_tune^(q), Y_tune^(q))_q ∈ [Q]. This hold-out tuning procedure is often used in practice, especially in statistical genetics, whenever such a dataset is available. However, when it comes to multi-task learning, obtaining validation data for all Q tasks can pose a significant challenge. Fortunately, our adaptive tuning procedure provides a compelling alternative that overcomes this obstacle. We present the results of our simulations in Figure <ref>. In these simulations, we vary the sample size of the hold-out dataset from 10 to 100. The y-axis is the average MSE per task of the estimator computed using the tuning parameter chosen by each of the two methods. Furthermore, we have pooled the hold-out data with the proxy data in computing the estimator with the adaptive validation method, to emphasize that adaptive validation is able to take full advantage of the data at hand without needing an additional set of tuning data. This adaptive method offers comparable performance to hold-out tuning, since pooling the data increases sample size as well as overlap between and for each q. The performance of adaptive tuning improves as the amount of hold-out data increases, as expected. § DISCUSSION, LIMITATIONS, AND BROADER IMPACTS We have described a flexible multi-task framework incorporating summary statistics from distinct sources with a general data-driven tuning scheme for selecting tuning parameters. Our theoretical analysis sheds light on the intrinsic price of using summary-level information from distinct sources for statistical analysis, and suggests that more overlap between the sources, less distributional shift, and larger proxy data sample sizes can alleviate this cost. 
Our data-driven tuning scheme allows models to be trained without sample splitting, making it more applicable to real-world settings with only summary statistics available. The limitations of our work are summarized as follows. First of all, our methods depend on a linear relationship between the covariates and the outcomes. To extend our framework to non-linear models, we may use the second-order Taylor approximation of the loss function as in <cit.>. However, the summary statistics used by such an algorithm are not found in existing literature or publicly available databases. Additionally, our theoretical results provide only upper bounds on the estimation error of the two estimators that we consider in this work. To fully characterize the cost of using summary statistics for multi-task learning, lower bounds resembling Theorem 2.2 of <cit.> are needed. We conjecture that our estimators converge at a minimax optimal rate, and we view the proof of this conjecture as an important future direction. Finally, we may also extend the framework of <cit.> to our summary statistic based setting to adjust for potential differences between tasks. Nevertheless, our results have important implications beyond high-dimensional statistical theory. The trade-off between proxy data sample size and discovery-proxy overlap may inform how polygenic risk models are built in real-world applications: Practitioners should prioritise alignment between the sources of summary statistics that they use to build these models, rather than optimizing for large sample sizes. This guidance may lead to more accurate polygenic scores, which have emerged as an important predictive tool in the field of precision medicine. We recognize that the development of polygenic risk scores, if done without care, may worsen existing health disparities <cit.>. This is a potential negative societal impact of our work. We hope that our multi-task learning framework may be used to incorporate data from diverse populations to improve generalizability and transportability of genetic risk predictions to overcome these negative impacts.
http://arxiv.org/abs/2307.00820v1
20230703080026
Butterfly factorization by algorithmic identification of rank-one blocks
[ "Léon Zheng", "Gilles Puy", "Elisa Riccietti", "Patrick Pérez", "Rémi Gribonval" ]
math.NA
[ "math.NA", "cs.NA" ]
Butterfly factorization by algorithmic identification of rank-one blocks Lé[email protected],2 [email protected] [email protected] PatrickPé[email protected][email protected] 1Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP, F-69342, LYON Cedex 07, France. 2valeo.ai, Paris, France. Many matrices associated with fast transforms possess a certain low-rank property characterized by the existence of several block partitionings of the matrix, where each block is of low rank. Provided that these partitionings are known, there exist algorithms, called butterfly factorization algorithms, that approximate the matrix into a product of sparse factors, thus enabling a rapid evaluation of the associated linear operator. This paper proposes a new method to identify algebraically these block partitionings for a matrix admitting a butterfly factorization, without any analytical assumption on its entries. § INTRODUCTION The rapid evaluation of a linear operator is a key issue in many fields such as scientific computing, signal processing and machine learning. In applications involving a very large number of parameters, the direct computation of the matrix-vector multiplication scales poorly because of a quadratic complexity in the size of the matrix. Many works have therefore focused on the construction of fast algorithms for matrix multiplication, typically relying on analytical or algebraic properties of the matrices appearing in the problems under study. These fast algorithms are often associated with a sparse factorization of the corresponding matrix, as is the case for the Hadamard matrix or the discrete Fourier transform matrix. 
Indeed, up to permutations of its rows and columns, such a matrix of size N admits a butterfly factorization, in the sense that it can be written as the product of 𝒪(log N) factors, each with 𝒪(N) nonzero entries, whose supports have a particular structure illustrated in figure <ref>. It was shown in <cit.> that the class of matrices admitting such a factorization is expressive, in the sense that it contains several structured matrices used in machine learning or signal processing. This model is therefore relevant for seeking sparse factorizations of operators for which no fast evaluation algorithm is known. Finding a fast algorithm associated with the butterfly factorization can then be formalized as an optimization problem, in which one minimizes in the Frobenius norm ‖·‖_F the approximation error of a matrix 𝐀∈ℂ^N× N by a product of butterfly factors 𝐗^(1), …, 𝐗^(L), up to row and column permutations encoded by the permutation matrices 𝐏, 𝐐: min_{(𝐗^(ℓ))_ℓ=1^L, 𝐏, 𝐐} ‖𝐀 - 𝐐^⊤𝐗^(1)…𝐗^(L)𝐏‖_F. When the optimal permutations 𝐏, 𝐐 are known, there exists an efficient hierarchical algorithm of complexity 𝒪(N^2) to find butterfly factors (𝐗^(ℓ))_ℓ=1^L yielding a small approximation error, with reconstruction guarantees in the noiseless case <cit.>. But when these permutations are not known, problem (<ref>) is conjectured to be hard: on the one hand, if one enumerates all possible permutations to solve (<ref>), one observes numerically that only a small proportion of permutations yields a small approximation error, as illustrated in figure <ref>, which shows the need to identify the right permutations in order to solve (<ref>); on the other hand, an exhaustive search over all permutations is not tractable, even when taking into account certain equivalences between permutations with respect to (<ref>), as discussed in section <ref>. In order to identify the optimal permutations, we rely on the fact that a matrix admitting a butterfly factorization possesses a so-called complementary low-rank property <cit.>, in the sense that there exist block partitionings of the matrix in which each block has low rank. It is then sufficient to identify these partitionings to solve problem (<ref>). This can be done analytically when the entries of the matrix are given by a smooth kernel (𝐱, ω) ↦ K(𝐱, ω) evaluated on parameters {𝐱_i }_i=1^N, {ω_j }_j=1^N, for instance for matrices associated with certain integral operators <cit.> or special function transforms <cit.>. On the other hand, if the matrix under study has no analytical form, or if such a form is not accessible, the literature does not, to the best of our knowledge, provide a method to identify these partitionings. We therefore propose a heuristic based on alternating spectral clustering of the rows and columns to identify the low-rank block partitionings, without any analytical assumption. In terms of applications, this heuristic makes it possible to verify algorithmically that a linear operator, for which the existence of a fast evaluation algorithm is unknown, possesses an approximate complementary low-rank property yielding a good approximation of the associated matrix by a product of butterfly factors. Section <ref> details this low-rank property used to identify the optimal permutations. 
Section <ref> shows numerically the necessity of identifying these permutations, which motivates our method explained in section <ref> and validated empirically in section <ref>. § PROBLEM FORMULATION In the rest of the paper, we restrict ourselves to square matrices of size N := 2^L for some integer L ≥ 2. For any matrix 𝐌, its support supp(𝐌) is the set of indices corresponding to the nonzero entries of 𝐌. We denote by 𝐈_N the identity matrix of size N, and by ⊗ the Kronecker product. Following <cit.>, 𝐀∈ℂ^N × N is a butterfly matrix if it admits a factorization 𝐀 = 𝐗^(1)…𝐗^(L), where each factor 𝐗^(ℓ)∈ℂ^N × N, called a butterfly factor, satisfies the fixed-support constraint supp(𝐗^(ℓ)) ⊆ supp(𝐒_ℓ) for ℓ∈ [L] := {1, …, L}, with 𝐒_ℓ := 𝐈_2^ℓ-1⊗ [[ 1 1; 1 1 ]] ⊗𝐈_N / 2^ℓ. The set of matrices admitting such a factorization is denoted ℬ. The supports 𝐒_ℓ are illustrated in figure <ref> and, by abuse of notation, the fixed-support constraint will be written supp(𝐗^(ℓ)) ⊆𝐒_ℓ in the following. The problem we wish to solve is therefore (<ref>), under the constraint supp(𝐗^(ℓ)) ⊆𝐒_ℓ for all ℓ∈ [L]. To find a good solution of the instance of the problem in which the permutation matrices 𝐏 and 𝐐 are fixed, we apply the hierarchical algorithm of <cit.>. In the following, we denote by E(𝐀, 𝐏, 𝐐) := ‖𝐀 - 𝐐^⊤𝐗̃^(1)…𝐗̃^(L)𝐏‖_F the approximation error given by the output (𝐗̃^(1), …, 𝐗̃^(L)) of algorithm <ref> when 𝐏 and 𝐐 are fixed. Conversely, the difficult case is the one where the permutations are not fixed: the rest of the section explains our approach, which relies on the complementary low-rank property. Complementary low-rank property <cit.> This property is defined using two cluster trees T^X and T^Ω, which are binary trees with L=log_2(N) levels (not counting the root) whose nodes are nonempty subsets of [N], with [N] as the root at level 0, and where the children of a node form a partition of their parent into two subsets of equal cardinality <cit.>. A matrix 𝐀 possesses the complementary low-rank property for T^X and T^Ω if, for each ℓ∈ [L-1], each node R at level L - ℓ of T^X and each node C at level ℓ of T^Ω, the restriction 𝐀_R, C of 𝐀 to the rows and columns indexed by R and C has low rank. Since the nodes of a given level of T^X and those of T^Ω form respectively a partition of the row and column indices, the complementary low-rank property requires that the blocks {𝐀_R, C}_R, C of the partitionings described by the levels of T^X and T^Ω have low rank. It was shown in <cit.> that every butterfly matrix 𝐀∈ℬ satisfies the (rank-1) complementary low-rank property for a specific pair of trees (T^X_⋆, T^Ω_⋆), as explained below in order to recall the concepts that will be useful later. Let us write 𝐒_p:q := 𝐒_p …𝐒_q (1 ≤ p < q ≤ L), and define the ℓ-th Monarch class <cit.> for ℓ∈ [L-1] as the set ℳ_ℓ := {𝐗𝐘 : supp(𝐗) ⊆𝐒_1:ℓ, supp(𝐘) ⊆𝐒_ℓ+1:L}. One then checks that ℬ⊆⋂_ℓ=1^L-1ℳ_ℓ, and that a matrix 𝐀 belongs to ℳ_ℓ if, and only if, 𝐀_R, C has rank at most 1 for every (R, C) ∈𝒫^(ℓ) := {R^(ℓ)_i }_i=1^N/2^ℓ×{C^(ℓ)_j }_j=1^2^ℓ, where R^(ℓ)_i := { i + (k-1) N / 2^ℓ, k ∈ [2^ℓ]}, C^(ℓ)_j := { (j-1) N / 2^ℓ + k, k ∈ [N/2^ℓ]}. One observes that {R^(ℓ)_i }_i=1^N/2^ℓ and {C^(ℓ)_j }_j=1^2^ℓ each form a partition of [N]: thus, when 𝐀∈ℳ_ℓ, these row and column partitions describe a partitioning of 𝐀 into rank-1 blocks. 
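The supports 𝐒_ℓ, the partitions 𝒫^(ℓ) and the rank-one property above are easy to reproduce numerically. The following Python sketch (written for illustration, not taken from the paper's code) builds the supports from Kronecker products, draws a random butterfly matrix and checks that every block of 𝒫^(ℓ) has rank at most 1; indices are 0-based in the code, whereas they are 1-based in the text.

```python
import numpy as np

def butterfly_support(N, ell):
    # S_ell = I_{2^(ell-1)} kron [[1,1],[1,1]] kron I_{N/2^ell}
    return np.kron(np.eye(2 ** (ell - 1)),
                   np.kron(np.ones((2, 2)), np.eye(N // 2 ** ell)))

def random_butterfly(N, rng):
    # Product of L butterfly factors with random values on the supports
    L = int(np.log2(N))
    A = np.eye(N)
    for ell in range(1, L + 1):
        A = A @ (butterfly_support(N, ell) * rng.standard_normal((N, N)))
    return A

def block_partition(N, ell):
    # Rows R^(ell)_i are strided index sets, columns C^(ell)_j are contiguous
    rows = [np.arange(i, N, N // 2 ** ell) for i in range(N // 2 ** ell)]
    cols = [np.arange(j * (N // 2 ** ell), (j + 1) * (N // 2 ** ell)) for j in range(2 ** ell)]
    return rows, cols

rng = np.random.default_rng(0)
N = 16
A = random_butterfly(N, rng)
for ell in range(1, int(np.log2(N))):
    rows, cols = block_partition(N, ell)
    ranks = [np.linalg.matrix_rank(A[np.ix_(R, C)]) for R in rows for C in cols]
    print(ell, max(ranks))   # every block has rank at most 1
```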
We then define T^X_⋆ and T^Ω_⋆ as the two trees for which the nodes of T^X_⋆ at level L - ℓ and the nodes of T^Ω_⋆ at level ℓ are precisely {R^(ℓ)_i }_i=1^N/2^ℓ and {C^(ℓ)_j }_j=1^2^ℓ, for each ℓ∈ [L-1]. Thus, 𝐀∈⋂_ℓ=1^L-1ℳ_ℓ is precisely a reformulation of the (rank-1) complementary low-rank property for the trees T^X_⋆ and T^Ω_⋆. Index permutation For any cluster tree T whose root is [N], and for any permutation σ: [N]→[N], we define the cluster tree σ(T) obtained by permuting the indices in T according to σ. In particular, we note that for any trees T^X and T^Ω, there exist several permutation matrices 𝐏 and 𝐐 for which σ_𝐐(T^X_⋆) = T^X and σ_𝐏(T^Ω_⋆) = T^Ω, where σ_𝐏, σ_𝐐 are the permutations associated with the matrices 𝐏, 𝐐. This defines equivalence classes of row permutations [𝐐_T^X] and of column permutations [𝐏_T^Ω]. Approach to solve (<ref>) Assume that the target matrix in (<ref>) is of the form 𝐀 := 𝐐̃^⊤𝐀̃𝐏̃, where 𝐀̃∈ℬ is a butterfly matrix, and 𝐏̃, 𝐐̃ are two arbitrary unknown permutation matrices. Since 𝐀̃ satisfies the complementary low-rank property for the trees T^X_⋆ and T^Ω_⋆, the matrix 𝐐̃^⊤𝐀̃𝐏̃ also satisfies it, but for the trees T^X := σ_𝐐̃(T^X_⋆) and T^Ω := σ_𝐏̃(T^Ω_⋆). If we manage to reconstruct T^X and T^Ω from the observation of 𝐀, then (<ref>) can be solved by choosing an arbitrary pair (𝐏, 𝐐) ∈ [𝐏_T^Ω] × [𝐐_T^X], and applying algorithm <ref> with these permutations fixed. Indeed, such a choice is enough to guarantee that 𝐐𝐀𝐏^⊤ satisfies the complementary low-rank property for T^X_⋆ and T^Ω_⋆, i.e., 𝐐𝐀𝐏^⊤∈⋂_ℓ=1^L-1ℳ_ℓ, since the matrices 𝐏^⊤ and 𝐐^⊤ are associated with the inverse permutations σ^-1_𝐏 and σ^-1_𝐐, and, by definition of the equivalence classes, we indeed have σ^-1_𝐏(T^Ω) = T^Ω_⋆ and σ^-1_𝐐(T^X) = T^X_⋆. In conclusion, in order to solve (<ref>), it is sufficient to identify the trees T^X and T^Ω for which the target matrix satisfies the complementary low-rank property, which amounts, by definition, to identifying partitionings of 𝐀 into rank-1 blocks. § NECESSITY OF RECOVERING THE PARTITIONS We now show empirically that identifying the trees T^X and T^Ω is in fact necessary to solve (<ref>) with 𝐀 := 𝐐̃^⊤𝐀̃𝐏̃. Consider 𝐀̃∈ℬ whose factors have nonzero entries drawn from a standard Gaussian distribution. We then enumerate all possible trees T^X, T^Ω, compute a solution via algorithm <ref> by fixing an arbitrary pair (𝐏, 𝐐) ∈ [𝐏_T^Ω] × [𝐐_T^X], and check that the only trees yielding a small error E(𝐀, 𝐏, 𝐐) are T^X and T^Ω. A count of all trees shows, however, that this experiment is not tractable for a large size N. Indeed, the number u_N of cluster trees sharing the same root of cardinality N satisfies the recurrence u_N = (1/2)\binom{N}{N/2}(u_N/2)^2 with u_2 = 1, since there are (1/2)\binom{N}{N/2} possible pairs of children for the root, and each child is a cluster tree whose root has cardinality N/2. We therefore consider N=8 for illustration purposes, which gives u_8 = 315. In figure <ref>, the exhaustive search over all pairs of trees shows that the error is zero for a single pair only, and that all other pairs fail to solve (<ref>). This empirically illustrates the necessity of identifying the right trees T^X and T^Ω in order to solve (<ref>). Formally proving such a necessity for matrices of arbitrary size is left for future work. 
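The counting recurrence can be checked directly; the short sketch below (illustrative code, not from the paper) reproduces u_8 = 315.

```python
from math import comb

def num_cluster_trees(N):
    # u_2 = 1 and u_N = (1/2) * binom(N, N/2) * (u_{N/2})^2
    if N == 2:
        return 1
    return comb(N, N // 2) // 2 * num_cluster_trees(N // 2) ** 2

print([num_cluster_trees(N) for N in (2, 4, 8, 16)])   # [1, 3, 315, ...]
```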
§ ALTERNATING SPECTRAL CLUSTERING We propose algorithm <ref>, based on alternating spectral clustering, to identify the trees T^X and T^Ω for which the matrix 𝐀 satisfies the complementary low-rank property. To describe our approach, let us start by explaining how to solve the problem min_{𝐌∈ℳ_ℓ, 𝐏, 𝐐} ‖𝐀 - 𝐐^⊤𝐌𝐏‖_F^2 for each ℓ∈ [L-1]. Since a matrix 𝐌 belongs to ℳ_ℓ if and only if the block 𝐌_R, C has rank at most 1 for every set of rows and columns (R, C) ∈𝒫^(ℓ), this problem is equivalent to min_{{ R_i }_i=1^N/2^ℓ, { C_j }_j=1^2^ℓ} ∑_i=1^N/2^ℓ∑_j=1^2^ℓ min_{𝐱, 𝐲} ‖𝐀_R_i, C_j - 𝐱𝐲^*‖_F^2, where { R_i }_i=1^N/2^ℓ, { C_j }_j=1^2^ℓ are partitions of the rows and columns into subsets of equal cardinality, and min_{𝐱, 𝐲} ‖𝐀_R, C - 𝐱𝐲^*‖_F^2 computes the best rank-1 approximation of 𝐀_R, C. The symbol * denotes the adjoint matrix. Row clustering For an alternating optimization to work, it is necessary to be able to solve problem <ref> when one of the two partitions is known. Let us therefore fix, without loss of generality, a column partition { C_j }_j=1^2^ℓ, and look for a row partition { R_i }_i=1^N/2^ℓ that minimizes (<ref>). To do so, we take inspiration from existing methods for the subspace clustering problem <cit.>. Let us define 2^ℓ graphs {𝒢_j }_j=1^2^ℓ, where the N nodes of the graph 𝒢_j are the N rows of 𝐀 restricted to the columns C_j, denoted 𝐀_k, C_j for k ∈ [N], and the edge weights of 𝒢_j are given by the similarity matrix 𝐖^(j)∈ℝ^N × N defined by 𝐖^(j)_k, l := ( | 𝐀_k, C_j^* 𝐀_l, C_j | / (‖𝐀_k, C_j‖_2 ‖𝐀_l, C_j‖_2))^α for all k, l ∈ [N], with α > 0 a parameter controlling the contrast between the edge weights. Intuitively, a group of rows restricted to the columns C_j is inter-connected by large weights in the graph 𝒢_j when the corresponding rows are correlated, which is the case when they form a rank-1 block. Conversely, two uncorrelated rows have a small weight. Thus, by solving a minimum cut problem on the graph 𝒢 whose similarity matrix is 𝐖 := ∑_j=1^2^ℓ𝐖^(j), the groups of nodes obtained should correspond to a row partitioning that minimizes (<ref>). Concretely, from the similarity matrix 𝐖, a spectral clustering <cit.> of the graph 𝒢 is performed by computing the eigenvector decomposition of the unnormalized Laplacian 𝐋 := 𝐃 - 𝐖, where 𝐃 is the degree matrix whose diagonal entries are 𝐖 (1 … 1 )^⊤. In order to guarantee a partitioning of the rows into groups of equal size, the k-means clustering step on the spectral embeddings is implemented following the method of <cit.>. Alternating optimization When the column partition is no longer fixed, the resolution of (<ref>) follows the alternating clustering algorithm <ref>, in which a column partition is initialized at random and, at each iteration, a spectral clustering of the rows is performed while fixing the column partition from the previous iteration, and vice versa by exchanging the roles of rows and columns. Without any guarantee of success, algorithm <ref> may require several re-initializations to find a solution. Final resolution of (<ref>) Given the target matrix 𝐀 := 𝐐̃^⊤𝐀̃𝐏̃, algorithm <ref> solves each problem (<ref>) independently for ℓ∈ [L-1] via algorithm <ref>. 
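To illustrate the row clustering step just described, the following NumPy/SciPy sketch builds the similarity matrix 𝐖 from a fixed column partition, forms the unnormalized Laplacian and clusters the spectral embedding of the rows. It is a simplified illustration: plain k-means is used, whereas the method above enforces groups of equal size following <cit.>, and the function names are ours, not the paper's.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def row_similarity(A, col_groups, alpha=1.0):
    """W = sum_j W^(j), with W^(j)_{k,l} = (|<A_{k,Cj}, A_{l,Cj}>| /
       (||A_{k,Cj}||_2 ||A_{l,Cj}||_2))^alpha."""
    N = A.shape[0]
    W = np.zeros((N, N))
    for C in col_groups:
        B = A[:, C]
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        G = np.abs(B @ B.conj().T) / np.maximum(norms * norms.T, 1e-12)
        W += G ** alpha
    return W

def spectral_row_partition(A, col_groups, n_groups, alpha=1.0):
    W = row_similarity(A, col_groups, alpha)
    Lap = np.diag(W.sum(axis=1)) - W              # unnormalized Laplacian D - W
    _, vecs = np.linalg.eigh(Lap)
    emb = vecs[:, :n_groups]                      # spectral embedding of the rows
    # Plain k-means; the equal-size balancing step of the paper is omitted here.
    _, labels = kmeans2(emb, n_groups, minit='++')
    return [np.where(labels == g)[0] for g in range(n_groups)]
```

Alternating this routine with its column counterpart, starting from a random column partition, gives the structure of algorithm <ref>.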
If each problem (<ref>) is solved correctly, and if the partitions found form valid trees T^X and T^Ω in the sense that they satisfy the axioms of a cluster tree, then 𝐀 has the complementary low-rank property for T^X and T^Ω. Problem (<ref>) is then solved via Algorithm (<ref>), fixing (𝐏, 𝐐) ∈ [𝐏_T^Ω] × [𝐐_T^X]. § EXPERIMENTS We evaluate the empirical performance of our method for factorizing 𝐀 := 𝐐̃^⊤𝐀̃𝐏̃ + ϵ (‖𝐀̃‖_F / ‖𝐍‖_F) 𝐍, where 𝐏̃, 𝐐̃ are random permutations, 𝐍 is a matrix with coefficients drawn from a standard Gaussian distribution, ϵ≥ 0 controls the relative noise level, and 𝐀̃ is either a random orthogonal butterfly matrix as defined by <cit.>, or the discrete Fourier transform (DFT) matrix. We apply Algorithm <ref> with {α_k }_k=1^K := {10^p }_p ∈{ -2, -1, 0, 1, 2} and M=5 on 20 instances of the problem, for ϵ∈{ 0, 0.01, 0.03, 0.1 } and N ∈{ 2^L }_L ∈{ 2, …, 7 }. Sizes N > 128 are not considered because Algorithm <ref> has cubic complexity in N: one run takes a few minutes for N=64 and one hour for N=128. When 𝐀̃ is a random orthogonal butterfly matrix, Algorithm <ref> succeeds on 100 % of the 20 problem instances, for all noise levels and all sizes considered, which means that Algorithm <ref>, repeated with sufficiently many values of α and random seeds, solves each problem (<ref>) independently for ℓ∈L-1, and that the partitions found form valid trees T^X and T^Ω. Figure <ref> illustrates the results of Algorithm <ref>. In case of success, Algorithm <ref> returns the same approximation error as 𝐀𝐏̃𝐐̃ obtained with 𝐏̃, 𝐐̃ known. In the noiseless case, the relative error reaches machine precision. In the noisy case, Figure <ref> shows that it is of the order of ϵ. When 𝐀̃ is the DFT matrix, Algorithm <ref> also succeeds 100 % of the time for all noise levels and N ≤ 64. For N=128, the success rate is 100 % in the noiseless case, but degrades in the noisy case, as shown in Table <ref>. Robustness could be improved, for instance, by solving the problems (<ref>) for ℓ∈L-1 jointly rather than independently. Conclusion  We have proposed a heuristic for identifying, without any analytical assumption, the partitions of a matrix into low-rank blocks that enable a butterfly factorization. Overcoming the robustness and scalability limitations of this heuristic would eventually make it possible to search for butterfly factorizations of operators used in signal processing or machine learning, such as graph Fourier transforms <cit.> or neural network layers <cit.>. Acknowledgments  This work was supported by the ANR project AllegroAssai ANR-19-CHIA-0009.
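For reproducibility of the setup, the sketch below (illustrative, not the authors' code) constructs one problem instance in the DFT case exactly as described above; only numpy is required.

# Illustrative construction of a problem instance (DFT case):
# A = Q~^T A~ P~ + eps * (||A~||_F / ||N||_F) * N, with random permutations and
# relative Gaussian noise of level eps.
import numpy as np

def dft_instance(N=64, eps=0.01, seed=0):
    rng = np.random.default_rng(seed)
    A_tilde = np.fft.fft(np.eye(N))                  # DFT matrix, which admits a butterfly factorization
    P = np.eye(N)[rng.permutation(N)]                # random column permutation matrix
    Q = np.eye(N)[rng.permutation(N)]                # random row permutation matrix
    noise = rng.standard_normal((N, N))
    A = Q.T @ A_tilde @ P
    A = A + eps * (np.linalg.norm(A_tilde) / np.linalg.norm(noise)) * noise
    return A, A_tilde, P, Q

A, A_tilde, P, Q = dft_instance()
# With the permutations known, Q A P^T recovers the (noisy) butterfly-structured matrix:
print(np.linalg.norm(Q @ A @ P.T - A_tilde) / np.linalg.norm(A_tilde))   # of order eps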
http://arxiv.org/abs/2307.03120v1
20230706164737
Particle production during Inflation with a non-minimally coupled spectator scalar field
[ "Zhe Yu", "Chengjie Fu", "Zong-Kuan Guo" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
[email protected] Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049, China School of Physical Sciences, University of Chinese Academy of Sciences, No.19A Yuquan Road, Beijing 100049, China [email protected] Department of Physics, Anhui Normal University, Wuhu, Anhui 241000, China [email protected] CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, No.19A Yuquan Road, Beijing 100049, China School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China We study the inflationary model with a spectator scalar field χ coupled to both the inflaton and Ricci scalar. The interaction between the χ field and gravity, denoted by ξ Rχ^2, can trigger the tachyonic instability of certain modes of the χ field. As a result, the χ field perturbations are amplified and serve as a gravitational wave (GW) source. When considering the backreaction of the χ field, an upper bound on the coupling parameter ξ must be imposed to ensure that inflation does not end prematurely. In this case, we find that the inflaton's evolution experiences a sudden slowdown due to the production of χ particles, resulting in a unique oscillating structure in the power spectrum of curvature perturbations at specific scales. Moreover, the GW signal induced by the χ field is more significant than primordial GWs at around its peak scale, leading to a noticeable bump in the overall energy spectrum of GWs. Particle production during Inflation with a non-minimally coupled spectator scalar field Zong-Kuan Guo August 1, 2023 ======================================================================================== § INTRODUCTION The inflationary scenario <cit.> has become the dominant paradigm of the early Universe to address the horizon and flatness problems in the standard cosmology. During inflation, quantum vacuum fluctuations are stretched out to super-horizon scales and become primordial perturbations <cit.>, where the scalar modes (i.e., primordial curvature perturbations) result in the observed cosmic microwave background (CMB) anisotropies and the large-scale structure (LSS). Thanks to the accurate CMB and LSS measurements, the amplitude of the power spectrum of curvature perturbations has been precisely constrained to 2.1× 10^-9 at k=0.05Mpc^-1 with a slight scale dependence <cit.>, which is consistent with the prediction of general single-field slow-roll inflation. Moreover, an extremely important prediction of inflation is the generation of a stochastic background of primordial gravitational waves (GWs), characterized by a nearly scale-invariant power spectrum. From observations of CMB B-mode polarization, the current bound on the tensor-to-scalar ratio r, describing the amplitude of primordial GWs, has been found to be r<0.036 at 95% confidence level for k=0.05Mpc^-1 <cit.>. Although one can obtain information about inflationary physics from observations of primordial perturbations, the CMB only probes a small fraction of inflation associated with the large scales (k≲ 1 Mpc^-1). However, GW detection opens a new window to observe primordial perturbations at smaller scales and to shed light on the picture of the last stages of inflation.
The ongoing and planned GW experiments such as pulsar timing arrays (NANOGrav <cit.>, SKA <cit.>), ground-based interferometers (LIGO <cit.>, Virgo <cit.>), and space-based interferometers (LISA <cit.>, Taiji <cit.>) have the potential to detect the stochastic GW background in the range of frequencies between the nHz and kHz range, covering scales around 10^6-10^18Mpc^-1. However, current bounds from CMB observations predict primordial GWs, originating from quantum vacuum fluctuations within the general single-field slow-roll framework, to be out of reach for these experiments due to the near scale-invariance of the GW spectrum, whose amplitude is suppressed at small scales. Nevertheless, the possibility of detecting the GW background from inflation through these experiments cannot be dismissed, especially if specific inflationary models produce a GW signal with a large amplitude and a significant deviation from scale-invariance <cit.>. During inflation, GWs can be generated through a classical mechanism in which the equation of motion for GWs incorporates a source term. Such a term emerges if additional fields, present during inflation, have interactions with the inflaton resulting in strong particle production, such as the gauge particle production through the coupling of the pseudo-scalar inflaton to gauge fields <cit.>. In this paper, we focus on the situation of scalar particle production during inflation, which has been widely studied in the literature <cit.>. A simple way to achieve such a situation is to introduce an extra scalar field χ that interacts with the inflaton ϕ via the coupling <cit.> g^2/2(ϕ-ϕ_0)^2 χ^2, where ϕ_0 is a constant having the dimension of mass, and g denotes a dimensionless coupling constant. The effective mass of the χ field, m_χ=g|ϕ-ϕ_0|, is related to the value of the inflaton ϕ, and vanishes exactly when ϕ=ϕ_0. For a short period when the inflaton crosses around ϕ_0, the mass m_χ changes non-adiabatically such that specific momentum modes of the χ field are excited and act as a classical source of GWs. However, it has been pointed out that the production of quanta of an extra scalar field interacting with the inflaton as described by Eq. (<ref>) induces an insignificant GW signal compared with primordial GWs <cit.>. When taking into account that the χ field becomes massless during inflation due to its coupling with another scalar field (other than the inflaton), the resulting spectrum of induced GWs does increase; however, it remains significantly smaller than the spectrum of primordial GWs and does not dominate the overall contribution <cit.>. In this paper, we explore a scenario where the χ field is coupled to the Ricci scalar R through the ξ R χ^2 term (with ξ a dimensionless coupling parameter) in addition to the interaction term (<ref>). In this scenario, during a brief period when the inflaton traverses ϕ_0, the effective mass squared of the χ field becomes negative as a result of the non-minimal coupling of the χ field to gravity. Consequently, the χ field undergoes a tachyonic instability leading to a burst of χ-particle production. This scenario proves to be more efficient in generating GWs compared to the case with minimal coupling. Our paper is organized as follows. In section <ref>, we start by introducing the inflationary model, where a spectator scalar field χ is coupled to both the inflaton and the Ricci scalar.
We then investigate the amplification of the χ field due to the tachyonic instability, and the production of GWs induced by the χ field. In section <ref>, we turn our attention to the phenomenology of this inflationary scenario in light of the backreaction of the amplified χ field on the background and perturbation evolution. Finally, the conclusion and discussion are given in section <ref>. Throughout the paper, we adopt c=ħ=1 and the reduced Planck mass defined as M_p=1/√(8π G). § MODEL Our model incorporates an extra scalar field that is coupled to both the inflaton and the Ricci scalar, within the framework of general single-field slow-roll inflation. This is specified by the following action, S=∫ d^4 x √(-g)[M_p^2/2R-1/2∇^μϕ∇_μϕ-V(ϕ)-1/2∇^μχ∇_μχ-g^2/2(ϕ-ϕ_0)^2 χ^2+1/2ξ R χ^2], where the ϕ field serves as the inflaton, and the χ field is a spectator scalar field. Note that in this study, we do not consider an initial homogeneous background for the χ field. Therefore, we treat the χ field as a quantum field. In this section, our primary focus lies on investigating the efficiency of the GW generation resulting from χ-particle production. Hence, we disregard the inflaton perturbations, the metric perturbations, and the backreaction caused by χ particles. The equation of motion for the χ field is given by, χ̈+3Hχ̇-1/a^2∇^2χ+[g^2(ϕ-ϕ_0)^2-ξ R]χ=0, where R=6(Ḣ + 2H^2). Then, the quantum field χ can be decomposed as χ=1/(2π)^3/2∫ d^3k[χ_kâ_k + χ^∗_-kâ ^†_-k]e^i k·x, where the creation and annihilation operators â^†_k and â_k satisfy the canonical commutation relation [â_k,â^†_k^']=δ(k-k^'). The mode functions χ_k obey the following equation of motion, χ̈_k+3H χ̇_k+ω_k^2 χ_k =0, with ω_k^2=k^2/a^2+g^2(ϕ-ϕ_0)^2-6ξ(Ḣ + 2H^2), which reduces to the following form, ω_k^2 ≃k^2/a^2+g^2(ϕ-ϕ_0)^2-12ξ H^2, under the slow-roll approximation during inflation. In a specific range of parameter values with ξ>0, we can observe the following intriguing phenomenon: as the inflaton rolls down the potential to around the value ϕ_0, the contribution of the g^2-term in Eq. (<ref>) becomes negligible, which can give rise to ω_k^2<0. Consequently, the modes with k/(aH)<√(12ξ) will experience a tachyonic instability until the inflaton moves far away from ϕ_0, resulting in an amplification of the corresponding mode functions. Meanwhile, these modes serve as a source term in the GW equation of motion. Next, we will numerically investigate the amplification of the vacuum fluctuations of χ and the generation of induced GWs. To illustrate the results for this model, we adopt the Starobinsky potential, i.e., V(ϕ)=M^2M_p^2 [1-exp(-√(2/3) ϕ/M_p)]^2 with M=9.53 × 10^-6M_p, as a typical representative. Within the spatially flat FRW metric, the inflationary dynamics is determined by the following field equations, -M_p^2(2Ḣ + 3H^2)= 1/2ϕ̇^2 - V, ϕ̈+3 H ϕ̇+ dV/dϕ=0. Then, we numerically solve the coupled set of background equations (<ref>) and (<ref>) and the perturbation equation (<ref>) using the initial Bunch-Davies vacuum state described by χ_k = 1/a√(2k), χ̇_k=-ik/aχ_k-Hχ_k. In the top panel of Fig. <ref>, we plot the evolution of the power spectrum of the χ field from N_1=29.995 to N_2=27.412 for the parameter set ϕ_0=4.57M_p, g = 100 M/M_p and ξ=6. Here, N_1 represents the e-folding number at the onset of the tachyonic instability, while N_2 corresponds to the e-folding number at which the tachyonic instability essentially ends and the power spectrum reaches its maximum.
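The setup above can be explored numerically. The sketch below is a minimal illustration in reduced Planck units (M_p = 1), not the authors' code: it integrates the background equations for the Starobinsky potential together with one mode function χ_k starting from the Bunch-Davies initial condition quoted above. The parameters follow the text (M = 9.53e-6, ϕ_0 = 4.57, g = 100 M, ξ = 6), while the initial inflaton value and the chosen k are assumptions made for demonstration.

# Illustrative sketch: background + one spectator mode chi_k, reduced Planck units.
import numpy as np
from scipy.integrate import solve_ivp

M, phi_c, xi = 9.53e-6, 4.57, 6.0
g = 100.0 * M

def V(phi):
    return M**2 * (1.0 - np.exp(-np.sqrt(2.0 / 3.0) * phi))**2

def dV(phi):
    e = np.exp(-np.sqrt(2.0 / 3.0) * phi)
    return 2.0 * M**2 * (1.0 - e) * np.sqrt(2.0 / 3.0) * e

def hubble(phi, dphi):
    return np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)

def rhs(t, y, k):
    # y = [phi, dphi, N, Re chi, Im chi, Re dchi, Im dchi]
    phi, dphi, N = y[:3]
    chi, dchi = y[3] + 1j * y[4], y[5] + 1j * y[6]
    H, a = hubble(phi, dphi), np.exp(N)
    w2 = (k / a)**2 + g**2 * (phi - phi_c)**2 - 12.0 * xi * H**2   # omega_k^2 in slow roll
    ddphi = -3.0 * H * dphi - dV(phi)
    ddchi = -3.0 * H * dchi - w2 * chi
    return [dphi, ddphi, H, dchi.real, dchi.imag, ddchi.real, ddchi.imag]

phi_init = 5.5                                        # assumed initial inflaton value
dphi_init = -dV(phi_init) / (3.0 * hubble(phi_init, 0.0))   # slow-roll initial velocity
H_init = hubble(phi_init, dphi_init)
k = 10.0 * H_init                                     # sub-horizon at the start (illustrative choice)
chi0 = 1.0 / np.sqrt(2.0 * k)                         # Bunch-Davies with a = 1 initially
dchi0 = (-1j * k - H_init) * chi0

sol = solve_ivp(rhs, (0.0, 45.0 / H_init),
                [phi_init, dphi_init, 0.0, chi0.real, chi0.imag, dchi0.real, dchi0.imag],
                args=(k,), rtol=1e-7, atol=1e-20)

N = sol.y[2]
P_chi = (k**3 / (2.0 * np.pi**2)) * (sol.y[3]**2 + sol.y[4]**2)   # power spectrum of this mode
print("e-folds reached: %.1f, max P_chi: %.3e" % (N[-1], P_chi.max()))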
It is easy to observe that during this phase, the modes with k≲ 10k_0, which is in good agreement with our foregoing estimate k/(a_0H_0)<√(12ξ), are significantly amplified due to the tachyonic instability. The intensity of the instability increases as k decreases. However, it is noteworthy that each mode shares the same intensity of instability when k<a_0H_0 ≪√(12ξ)a_0H_0, where the ξ-term in Eq. (<ref>) dominates over the other terms. As a result, the power spectrum 𝒫_χ features a peak at around k_0 and has a k^3 slope in the infrared region. For modes with k > 10 k_0, since ω_k^2>0 still holds true when the inflaton reaches around ϕ_0, the amplitude of the corresponding power spectrum decays with the expansion of the Universe. By numerically computing the integral given in Eq. (<ref>) and utilizing the relation (<ref>), we can determine the present energy spectrum of GWs induced by the amplified modes of the χ field, which is displayed in the bottom panel of Fig. <ref>. It is evident that the GW energy spectrum exhibits a peak at frequencies within the range detectable by LISA and Taiji, surpassing their sensitivity curves. This indicates that the χ field serves as an extremely efficient source of GWs in our model. However, it is important to note that these results are obtained under the assumption of neglecting the backreaction of the χ field. It has been pointed out in Ref. <cit.> that the energy conservation law imposes an upper bound on the energy density of the amplified field fluctuations. Specifically, it can be expressed as follows: ρ_f(N) < Δρ(N) = 1/2ϕ̇(N_1)^2+ V(ϕ(N_1)) - ρ_ pot(N) where the energy density of the χ field fluctuations ρ_f and the potential energy density of the inflaton ρ_ pot can be expressed, based on Eq. (<ref>), as follows ρ_f(N<N_1) = 1/1+ξ M_p^-2⟨χ^2⟩ [1/2⟨χ̇^2⟩+1/2 a^2⟨(∇χ)^2⟩+1/2g^2(ϕ-ϕ_0)^2⟨χ^2⟩ -6ξ H⟨χχ̇⟩+ξ/a^2⟨∇^2(χ^2) ⟩], ρ_ pot(N<N_1) = V(ϕ)/1+ξ M_p^-2⟨χ^2⟩. Since the g^2-term is dominant in Eq. (<ref>) at the time around N=N_2, we depict the evolution of g^2(ϕ-ϕ_0)^2⟨χ^2⟩/2 instead of ρ_f in Fig. <ref>. Furthermore, we include the evolution of Δρ in the plot as well. It is apparent from the plot that g^2(ϕ-ϕ_0)^2⟨χ^2⟩/2 exceeds Δρ, indicating a violation of the condition (<ref>). Therefore, the assumption of neglecting the backreaction from the χ field is an oversimplification. In the next section, we will conduct a thorough analysis of χ-particle production, including the effects of backreaction on the evolution of the background and perturbations. § BACKREACTION In our previous discussion, we have highlighted the potential significance of the backreaction arising from the produced χ particles in the evolution of the background and perturbations. Thus, in this section, we carry out numerical computations of the coupled set of evolution equations (<ref>)-(<ref>) within the Hartree approximation, along with the metric perturbation equation (<ref>). Let us now examine the impact of the substantial amplification of χ field fluctuations on the background evolution of the inflaton field. Taking into account the coupling potential between the inflaton and χ field, the inflaton possesses an effective potential given by V_ eff = V(ϕ) + 1/2g^2(ϕ-ϕ_0)^2⟨χ^2⟩. Prior to the inflaton approaching the vicinity of ϕ=ϕ_0, that is, prior to the onset of the tachyonic instability, we have V_ eff = V(ϕ) and the inflaton undergoes standard slow-roll evolution.
As the inflaton reaches around ϕ_0, ⟨χ^2⟩ experiences exponential growth due to the tachyonic instability of specific modes of the χ field. If ⟨χ^2⟩ rapidly increases to a sufficiently large value, it causes a transition in the derivative of the effective potential dV_ eff/dϕ from a positive value to a negative value. Consequently, the global minimum of the effective potential V_ eff shifts to a field value very close to ϕ_0, as illustrated in the left panel of Fig. <ref>. Due to this shift, the inflaton starts oscillating near the minimum of its effective potential, leading to the premature termination of inflation. As a result, inflation is unable to provide a sufficiently large e-folding number. This situation arises in the model parameter space that leads to a significant tachyonic instability, such as the model parameters selected in the previous section. The right panel of Fig. <ref> provides a clear illustration of the inflaton's dynamical evolution, considering the backreaction effect for the previously chosen model parameters. It is easy to observe that the inflaton's behavior is consistent with the earlier description, where it enters a phase of oscillations around ϕ_0. This highlights the importance of taking into account the backreaction of the χ field when studying the evolution of the inflaton. In this model, the parameter ξ is crucial in determining the strength of the tachyonic instability. A larger value of ξ leads to a more pronounced tachyonic instability. This can result in a more efficient χ-particle production, which can have important implications for the evolution of the inflaton field. To ensure that inflation ends in its original way, i.e., through the slow-roll mechanism and not dominated by the backreaction effect, an upper bound on the parameter ξ should be imposed. This upper bound ensures that the tachyonic instability is not too strong, preventing the χ field from dominating the dynamics of the inflaton field. The value of the upper bound on ξ depends on the specific inflaton potential and the choice of the parameters g and ϕ_0. In general, the upper bound can be determined by requiring that the slow-roll conditions are satisfied throughout the inflationary period, and that the backreaction of the χ field does not lead to a premature end of inflation. For the same values of g and ϕ_0 as before, the upper bound on ξ has been found to be ∼ 4.16. In the left panel of Fig. <ref>, we show the evolutionary curves in the ϕ-ϕ̇ plane for the case of ξ=4.16 with the same g and ϕ_0 as before. This figure illustrates the dynamics of the inflaton during the inflationary phase when the tachyonic instability of the χ modes is constrained by the upper bound on ξ. As the inflaton field rolls down its potential, it loses almost all of its kinetic energy soon after the onset of χ-particle production. However, the tachyonic instability of the χ modes also comes to an end when ϕ̇≃ 0. At this point, the potential energy of the inflaton, V(ϕ), still dominates the energy density of the Universe, ensuring that the inflationary phase continues. As the Universe expands, the expectation value of χ^2, denoted by ⟨χ^2⟩, quickly decays. This decay leads to a decrease in the coupling potential, which in turn allows the inflaton to return to its slow-roll trajectory. This behavior demonstrates that, even in the presence of tachyonic instability, the inflationary dynamics can be maintained as long as the upper bound on ξ is satisfied. In the right panel of Fig.
<ref>, we present the evolution of the power spectrum at a specific scale for each of the field perturbations χ_k, δϕ_k, and Ψ_k, along with the curvature perturbation ℛ_k = Ψ_k + Hδϕ_k/ϕ̇. This figure illustrates that the χ_k mode undergoes an exponential growth and then rapidly decays after reaching its maximum value. As anticipated from perturbation equations (<ref>) and (<ref>), δϕ_k and Ψ_k do not grow due to the absence of a χ background. Interestingly, the evolution of the ℛ_k mode exhibits sharp changes before it becomes frozen outside the horizon. This occurs because the friction term in the equation of motion for ℛ_k contains the slow-roll parameter η = ϕ̈/(Hϕ̇), and the drastic changes of ϕ̇, as observed in the left panel of Fig. <ref>, result in significant alterations of η, which in turn affect the evolution of the ℛ_k mode. As a consequence, the power spectrum of curvature perturbations for the modes that exit the horizon around the time when there is a significant change in the inflaton velocity will deviate from the near scale invariance expected in the usual slow-roll inflation. In Fig. <ref>, one can find that the resulting power spectrum of curvature perturbations has oscillations at these scales. Finally, we present the current energy spectrum of the GW signal predicted by this model. While the energy spectrum of induced GWs remains significantly distant from the sensitivity curves of the space-based GW experiments, its peak surpasses the amplitude of primordial GWs predicted by the Starobinsky potential by more than an order of magnitude. Consequently, the total energy spectrum of the GW signal exhibits a distinct bump at specific scales. This characteristic could serve as a unique identifier for particle production during inflation, provided that future GW experiments are capable of detecting such a subtle GW signal. § CONCLUSION AND DISCUSSION In this article, we explore the phenomenology of particle production for a spectator scalar field χ during inflation. The χ field is assumed to be coupled to both the inflaton and the Ricci scalar. The interaction between the χ field and gravity can cause the effective mass squared of the χ field to become negative, which in turn triggers the tachyonic instability of specific χ modes. As a result, the amplified χ field will act as a GW source, generating a GW signal. If we disregard the backreaction of the χ field and select suitable model parameters, particularly the ξ value that is positively correlated with the strength of the tachyonic instability, specific modes of the χ field will be significantly amplified, making the resulting GW signal detectable by LISA and Taiji. However, in reality, the backreaction of the χ field will cause the premature termination of inflation in this case. To guarantee that inflation ends via the slow-roll mechanism, it is necessary to impose an upper bound on the parameter ξ. In the case of adopting this upper bound and taking into account the backreaction of the χ field, we observe that the inflaton velocity nearly drops to zero shortly after the emergence of the tachyonic instability; however, it quickly reverts to the slow-roll regime. This evolution of the inflaton results in a special oscillating structure in the power spectrum of curvature perturbations at certain scales. Moreover, even though the GW signal induced by the χ field is predicted to be very weak, its peak is more than an order of magnitude larger than the amplitude of primordial GWs.
This, in turn, leads to a noticeable bump in the overall energy spectrum of GWs. Note that we focus on the phenomenology at small scales in this paper. Interestingly, if we shift our attention to the CMB scales by taking the value of the parameter ϕ_0 close to the initial field value of the inflaton, the phenomena predicted in this model can lead to some intriguing results. On one hand, an appropriate scalar power spectrum with superimposed oscillations could potentially explain the large-scale CMB anomalies as discussed in recent works <cit.>. On the other hand, the presence of an induced component in the total GW signal would violate the standard consistency relation between the tensor-to-scalar ratio and the tensor spectral index, as predicted by the usual single-field slow-roll inflation. These topics are fascinating and warrant further investigation in future studies. We thank Zhi-Zhang Peng for useful discussions. This work is supported in part by the National Key Research and Development Program of China Grant No. 2020YFC2201501, in part by the National Natural Science Foundation of China under Grant No. 12075297 and No. 12235019. § THE BASIC EQUATIONS WITH BACKREACTION OF PERTURBATIONS In this appendix, we derive the basic equations with backreaction of perturbations by following Ref. <cit.>. It is common to separate the inflaton field into a homogeneous background ϕ, and a perturbation δϕ. In the case of the χ field, it is not necessary to make a similar separation because it is already considered as a quantum field. Throughout this paper, we work with the spatially flat FRW metric in the conformal Newtonian gauge, and then the perturbed metric, incorporating both the first-order scalar metric perturbation Ψ and the second-order tensor perturbation h_ij, can be written as d s^2=-(1+2Ψ)dt^2+a^2[(1-2Ψ) δ_i j+h_i j/2]d x^i d x^j. A common method for approximately estimating the impact of field fluctuations on the background and perturbation evolution is to incorporate Hartree terms into the equations of motion <cit.>. In this approximation, the background equations are as follows: 3M_p^2H^2 = 1/1+ξ M_p^-2⟨χ^2⟩[1/2ϕ̇^2+1/2⟨δϕ̇^2⟩+1/2⟨χ̇^2⟩+1/2 a^2⟨ (∇δϕ)^2⟩+1/2 a^2⟨(∇χ)^2⟩ -6ξ H⟨χχ̇⟩+ ξ/a^2⟨∇^2(χ^2) ⟩+V(ϕ)+1/2∂^2 V /∂ϕ^2⟨δϕ^2 ⟩+1/2g^2(ϕ-ϕ_0)^2⟨χ^2 ⟩], -M_p^2(2Ḣ+3H^2)= 1/1+ξ M_p^-2⟨χ^2⟩[ 1/2ϕ̇^2 +1/2⟨δϕ̇^2⟩ +( 1/2 +2ξ)⟨χ̇^2⟩ - 1/6a^2⟨(∇δϕ)^2⟩ - 1/6a^2⟨(∇χ)^2⟩ - 2ξ/3 a^2⟨∇^2(χ^2)⟩ + 4ξ H⟨χχ̇⟩ + 2ξ⟨χχ̈⟩ - V(ϕ) - 1/2∂^2 V/∂ϕ^2⟨δϕ^2⟩ - 1/2 g^2(ϕ-ϕ_0)^2⟨χ^2⟩], ϕ̈+3 H ϕ̇+∂ V/∂ϕ+1/2∂^3 V/∂ϕ^3⟨δϕ^2⟩+g^2(ϕ-ϕ_0)⟨χ^2⟩=0, where the notation ⟨...⟩ represents the expectation value, calculated, e.g., as ⟨χ^2⟩ = 1/(2π)^3∫ d^3k |χ_k|^2. Then, the momentum-space linearized field perturbation equations are given by δϕ̈_k+3H δϕ̇_k+Ω_k^2 δϕ_k =4Ψ̇_kϕ̇-2∂ V/∂ϕΨ_k, with Ω_k^2=k^2/a^2+∂^2 V/∂ϕ^2+g^2⟨χ^2⟩+1/2∂^4 V/∂ϕ^4⟨δϕ^2⟩, and χ̈_k+3H χ̇_k+ω_k^2 χ_k =0, with ω_k^2=k^2/a^2+g^2(ϕ-ϕ_0)^2+g^2⟨δϕ^2⟩-6ξ(Ḣ + 2H^2), where Ψ_k obeys the following perturbed Einstein equations, 3HΨ̇_k + ( k^2/a^2 + 3H^2 ) Ψ_k = - 1/2 M_p^2( ϕ̇δϕ_k - Ψ_kϕ̇^2 + dV/dϕδϕ_k ), Ψ̇_k + H Ψ_k =ϕ̇/2 M_p^2δϕ_k. Equations (<ref>) and (<ref>) can be combined to give Ψ_k=ϕ̇δϕ̇_k+3Hϕ̇δϕ_k+(∂ V/∂ϕ) δϕ_k/ϕ̇^2- 2 M_p^2(k/a)^2, Next, we derive the equation of motion for the second-order tensor perturbation h_ij, given by h_i j^''(τ,x)+2 ℋh_i j^'(τ,x) -∇^2 h_i j(τ,x)=4/M_p^2π_i j^TT(τ,x), where a prime denotes the derivative with respect to the conformal time τ≡∫^t dt/a, and ℋ≡ a^'/a denotes the conformal Hubble parameter.
The source term on the right side of the equation can be written as π_i j^T T=π̂_i j^l m[(1+2ξ)∂_l χ∂_m χ+2ξχ∂_m ∂_l χ], where π̂_i j^l m is the transverse-traceless projection operator. It should be noted that the contribution of the ϕ field perturbations to the GW source term is disregarded in light of their lack of enhancement in this model. By virtue of the polarization tensor e^λ_i j with λ=+,×, we can expand the tensor perturbation and the source term in Fourier space, respectively, as h_i j= ∑_λ=+,×∫d k^3/(2π)^3/2e^ik·xe^λ_i j(k)h_k^λ(τ), π_i j^T T = ∑_λ=+,×∫d k^3/(2 π)^3/2 e^i k·xe^λ_i j e^λ,l m∫d p^3/(2 π)^3/2χ_|k-p|χ_p[(2ξ+1) (p_l p_m - k_l p_m )-2ξ p_l p_m ] = ∑_λ=+,×∫d k^3 d p^3/(2 π)^3 e^i k·xe^λ_i j e^λ,l m p_l p_m χ_ | k-p |χ_p. Substituting the above expansions into Eq. (<ref>), we obtain h_k^λ''+2ℋ h_k^λ'+k^2 h^λ_k=4 /M_p^2∫d^3 p/(2π)^3/2e^λ,ijp_ip_j χ_|k-p|χ_p. We solve this equation through the Green’s function method, i.e., h^λ_k(τ) = 4 /M_p^2∫^τ dτ^' G_k(τ,τ^') ∫d^3 p/(2π)^3/2e^λ,ijp_ip_j χ_|k-p|χ_p, where G_k(τ,τ^') is the Green’s function. The Starobinsky potential is flat enough to use the de Sitter approximation to obtain the Green's function, so that a=-1/(Hτ) and the Green’s function reads <cit.> G_k(τ, τ^')= 1/(k^3τ^'2)[(1+k^2 ττ^') sin k(τ-τ^')-k(τ-τ^') cos k(τ-τ^')] Θ(τ-τ^'). After some algebraic manipulation, the power spectrum, 𝒫_h = ∑_λ=+,×k^3/(2π^2)|h_k^λ|^2, can be calculated as 𝒫_h(k)=2 k^3/π^4 M_pl^4∫_0^π d θsin ^5 θ∫ d p p^6|∫^τ d τ^' G_k(τ, τ^')χ_pχ_|k-p||^2. The current energy spectrum of GWs is related to the power spectrum of tensor perturbations through <cit.> Ω_GW, 0(k) ≃ 1.7 × 10^-7𝒫_h(k).
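For reference, a minimal sketch (illustrative only; the function names are ours) of the de Sitter Green's function quoted above and of the final conversion to the present GW energy spectrum:

# Minimal sketch of G_k(tau, tau') and of Omega_GW,0 = 1.7e-7 * P_h, as quoted above.
import numpy as np

def green(k, tau, tau_p):
    """G_k(tau, tau') for tau > tau' (conformal times are negative during inflation)."""
    if tau_p >= tau:
        return 0.0                                     # theta function Theta(tau - tau')
    x, xp = k * tau, k * tau_p
    return ((1.0 + x * xp) * np.sin(x - xp) - (x - xp) * np.cos(x - xp)) / (k**3 * tau_p**2)

def omega_gw_today(P_h):
    return 1.7e-7 * P_h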
http://arxiv.org/abs/2307.02342v1
20230705145603
Towards a Formal Verification of the Lightning Network with TLA+
[ "Matthias Grundmann", "Hannes Hartenstein" ]
cs.LO
[ "cs.LO", "cs.CR", "cs.DC" ]
Towards a Formal Verification of the Lightning Network with TLA+ Matthias Grundmann, Hannes Hartenstein KASTEL Security Research Labs Karlsruhe Institute of Technology (KIT) Karlsruhe, Germany ======================================================================================================================================================== Payment channel networks are an approach to improve the scalability of blockchain-based cryptocurrencies. Because payment channel networks are used for the transfer of financial value, their security in the presence of adversarial participants should be verified formally. We formalize the protocol of the Lightning Network, a payment channel network built for Bitcoin, and show that the protocol fulfills the expected security properties. As the state space of a specification consisting of multiple participants is too large for model checking, we formalize intermediate specifications and use a chain of refinements to validate the security properties where each refinement is justified either by model checking or by a pen-and-paper proof. TLA+, Model Checking, Blockchain, Payment Channel Networks § INTRODUCTION Blockchain-based cryptocurrencies do not scale well with respect to their transaction throughput. One approach to improve said scalability is Payment Channel Networks – a second layer on top of a blockchain that processes transactions without writing each transaction to the blockchain. A payment channel between two users is opened by performing one transaction on the underlying blockchain. Once a payment channel is open, it allows for performing an unlimited number of transactions between its participating users without writing to the blockchain. Finally, a payment channel is closed by publishing a second transaction on the blockchain. In a payment channel network, the participating users are connected by payment channels and can perform multi-hop transactions so that the sender and the recipient of a transaction do not need to have a payment channel directly connecting them but it suffices that a path between sender and recipient over a set of payment channels exists. The security model for payment channels requires that honest users cannot lose their funds even if all other users behave adversarially. To avoid financial loss caused by design flaws in a payment channel protocol, it should be verified that the protocol is secure. In this paper, we analyze the security of the protocol of the Lightning Network <cit.>, a payment channel network for Bitcoin <cit.>, for which different implementations exist and which is used in practice. Our goal is to verify security properties of the Lightning Network's protocol. While this goal has already been approached in previous work <cit.>, we aim at verifying the properties in a largely machine-checked way because the complexity of the Lightning Network's protocol is difficult to handle. Our general approach is to formalize the Lightning Network's protocol in TLA+ and verify using model checking that the protocol fulfills the security property that honest parties retrieve at least their correct balance. Due to the complexity of the Lightning Network's protocol, we cannot directly model check the protocol specification. Instead, we use model checking for the most difficult proof steps and we provide pen-and-paper proofs to extend the statements about specifications that we can model check to the whole protocol specification.
Concretely, we formalize an ideal functionality of a payment channel that abstracts the behavior of the Lightning Network's protocol. We show that the formalized protocol specification of a payment channel refines the ideal channel functionality by explicitly specifying a refinement mapping between the formalized protocol specification and the ideal channel functionality. We verify the validity of the refinement mapping using a model checker. By using the ideal channel functionality, we specify an abstraction of the Lightning Network's protocol for multi-hop payments. We use a model checker to verify that this specification of idealized channel functionalities implements the security properties for multi-hop payments (e.g., dishonest parties cannot steal money). As the formalized protocol specification refines the specification of idealized channel functionalities, it follows that the formalized protocol also implements the security properties. We describe the Lightning Network's protocol in more detail in <ref> and give a brief summary of related work in <ref>. We give an overview of our approach in <ref> and describe how we show the individual proof steps. Our work is still in progress and, thus, not all steps are complete, but with this work-in-progress report we aim to give an introduction to the overall approach. We do not provide the proof steps in detail but elaborate on the proof ideas. § FUNDAMENTALS §.§ Payment Channels: Single-Hop Payments A payment channel is a protocol between two users that enables these two users to deposit coins into the payment channel during opening, perform transactions between the two users by updating the payment channel, and retrieve their final funds by closing the payment channel. At every state of the protocol, each user is guaranteed to be able to close the channel to retrieve their current balance independently of the cooperation of the other user. Even with an actively malicious channel partner, an honest user cannot lose their funds as long as the user actively monitors the underlying blockchain and reacts to malicious closing attempts. On a high level, a payment channel is implemented as a shared account: The two users open the shared account by depositing coins into the shared account and store the allocation of the funds, i.e., which user owns how many coins, in their current state. To perform a transaction sending x coins from one user to the other user, both users agree on a new allocation of funds in which x coins are deducted from the sender's share of funds and added to the recipient's share of funds. By updating their state, both users can perform an unlimited number of transactions between each other just by communicating with each other. To fulfill the security guarantees, it needs to be ensured that the payment channel can be closed only in a state that represents the latest allocation of funds. Particularly, the channel may not be closed with an outdated allocation of funds and an honest user must be able to close a channel in a state with the latest allocation of funds. More technically, a payment channel is opened in the Lightning Network by creating a funding transaction[See BOLT 2 and BOLT 3, <https://github.com/lightning/bolts/blob/master/00-introduction.md>.]. The funding transaction has an input spending an output from the funding user (funder)[At present, the Lightning Network supports only single-funded channels, i.e. only one user deposits coins into the channel during opening.]
and the funding transaction has a multi-sig output that is spendable only by the two users in the channel together. Just publishing the funding transaction on the blockchain would create a dependence of the funder on the other user for spending the funding transaction's output as the funding transaction's output can only be spent by the two users together. To prevent such a dependence, an initial commitment transaction that spends the funding transaction's output is created by the two users and the non-funding user sends their signature for the initial commitment transaction to the funder who only publishes the funding transaction after receiving this signature. A commitment transaction has at least two outputs: One output for each user that is redeemable only by this user and has an amount that corresponds to the balance the user currently has. In the initial commitment transaction, all funds are spendable by the funder.[This is a simplification; the Lightning Network's specification allows the funder to send a small amount to the non-funding user already in the initial commitment transaction (see ).] For a payment from one user to the other, an HTLC (Hash Timelocked Contract) is added to the channel. These HTLCs will also be used for multi-hop payments. An HTLC is a contract that encodes the agreement that the recipient receives a specified amount if the recipient proves knowledge of a preimage to a specified hash before a specified time has passed. To make a payment using an HTLC, the channel is updated to add the HTLC. The HTLC is added by adding a dedicated output that represents the HTLC to a new commitment transaction. The amount of coins that are part of the HTLC are deducted from the payment's sender's output in the new commitment transaction. After the HTLC is committed to the payment channel, the recipient of the payment fulfills the HTLC by sending the preimage to the sender of the payment. Then, the channel is updated by creating a new commitment transaction without the HTLC output to remove the HTLC and, in the new commitment transaction, the HTLC's amount is added to the recipient's balance. If the recipient does not fulfill the HTLC before the timelock, the HTLC is also removed but the HTLC's amount is added back to sender's balance. For an update of the channel, the sender of the payment creates a new commitment transaction and sends a signature for this new commitment transaction to the payment's recipient. Now, the recipient has two valid versions of the commitment transaction: The current and the new commitment transaction which are both signed by the payment's sender. Both versions of the commitment transaction are valid and can be published on the blockchain. As a malicious user might publish an outdated commitment transaction, commitment transactions should be `revoked' so that they cannot be published anymore. As a signature to a commitment transaction cannot be undone, the Lightning Network uses an approach for revocation that relies on incentives: A user can be punished for publishing an outdated commitment transaction. For each commitment transaction there exists a revocation key pair. With knowledge of the private revocation key, one user can spend all outputs of the commitment transaction that the user's counterpart in the channel has published. In this way, the transaction's outcome is revoked while the transaction itself is persisted. 
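To make the HTLC logic described above concrete, the following toy Python sketch (illustrative only, not an implementation of the BOLT specifications) captures the two outcomes of an HTLC: the amount goes to the recipient if a valid preimage is presented before the timelock, and back to the sender otherwise. SHA-256 is used here as the hash function; all names and values are chosen for readability.

# Toy sketch of an HTLC's contract logic (not Lightning implementation code).
import hashlib
from dataclasses import dataclass

@dataclass
class HTLC:
    amount: int
    payment_hash: bytes   # y = H(x)
    timelock: int         # block height after which the sender can reclaim the amount

    def fulfill(self, preimage: bytes, current_height: int) -> str:
        if current_height >= self.timelock:
            return "expired: amount returns to sender"
        if hashlib.sha256(preimage).digest() == self.payment_hash:
            return "fulfilled: amount goes to recipient"
        return "invalid preimage"

x = b"secret preimage"
htlc = HTLC(amount=1000, payment_hash=hashlib.sha256(x).digest(), timelock=800_000)
print(htlc.fulfill(x, current_height=799_000))   # fulfilled
print(htlc.fulfill(x, current_height=800_500))   # expired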
During an update of a channel, both users send each other their signature for the new commitment transaction and reply by sending the private revocation key for the now outdated commitment transaction to revoke the outdated commitment transaction. As the users do not have the private revocation key for the current commitment transaction of their counterpart, they cannot punish each other for correct behavior like publishing the current commitment transaction. For the security of the protocol it is crucial that each user has the necessary private revocation keys for the states that are outdated and that the other user in the channel does not have the private revocation key for a state that is considered the latest state. §.§ Payment Channel Networks: Multi-Hop Payments If two users do not have a common payment channel but they are connected over a path of payment channels of other users, they can make multi-hop payments between each other. The intermediate users forward the payment over their channels and might receive a small fee for their service. To prevent intermediaries from stealing or losing coins, it should be guaranteed for a multi-hop payment that each intermediary receives an incoming payment on one channel iff the intermediary forwards the payment on another channel. Also, the sender should send the payment to an intermediary iff the recipient receives the payment from an intermediary. The Lightning Network uses HTLCs for multi-hop payments to achieve these security properties. The recipient of a payment draws a random value x and calculates the hash value y = H(x) using a cryptographic hash function H. The recipient sends y to the sender of the payment. The sender of the payment creates an HTLC with the first intermediary using y as the hash condition for the HTLC. The intermediary creates an HTLC with the next hop and each intermediary repeats this process until the last intermediary creates an HTLC with the recipient of the payment. The recipient knows the preimage x for the hash condition y and fulfills the HTLC by sending x to the last intermediary. By fulfilling the HTLC, the payment's recipient receives the payment's amount from the last intermediary. Again, each intermediary forwards the secret value x fulfilling the HTLCs along the route until the sender receives x and pays the first intermediary. The timelocks of the HTLCs are chosen in a descending order from the sender to the recipient, so that each intermediary has enough time to fulfill the incoming HTLC from the previous hop if the next hop fulfills the outgoing HTLC. §.§ TLA+ The Temporal Logic of Actions (TLA) <cit.> is a temporal logic to reason about properties of a system. The language TLA+ is based on TLA and can be used to formalize the behavior of a system. Using tools like a model checker (TLC) or a theorem prover (TLAPS), invariants and properties can be shown to be valid for a formalized system. In TLA+, the state of a system is described by a set of variables vars. A system is defined by defining a set of initial states for which the formula Init is valid and by defining an action Next that determines which steps are allowed for the system to change its state. Using these components, a system is represented as a formula Init ∧ □[Next]_vars. An additional conjunct may be a fairness condition that asserts that certain steps must be taken if they are continuously allowed. The Next action is typically a disjunction of multiple subactions that define different ways for the system's state to be updated; a toy illustration of this style of specification and of explicit-state exploration is sketched below.
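The following Python sketch (illustrative only; it is not TLC and the toy system is an assumption for demonstration) mimics the idea of explicit-state model checking a specification given as an initial-state set and a Next relation: enumerate all reachable states and check an invariant on each of them.

# Toy explicit-state exploration in the spirit of Init /\ [][Next]_vars.
def reachable(init_states, next_relation):
    seen, frontier = set(init_states), list(init_states)
    while frontier:
        s = frontier.pop()
        for t in next_relation(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy spec: a counter that may increment up to 3 or reset to 0.
init = [0]
def next_relation(s):
    successors = [0]              # Reset subaction
    if s < 3:
        successors.append(s + 1)  # Increment subaction
    return successors

states = reachable(init, next_relation)
assert all(0 <= s <= 3 for s in states)   # an invariant checked over all reachable states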
These (sub)actions can be grouped into modules. Each module can be instantiated multiple times for different sets of variables. § RELATED WORK TLA+ is used in the industry <cit.> and there are also examples in the scientific literature how TLA+ has been used to reason about the properties of protocols: Narayana et al. <cit.> used TLA+ to search for vulnerabilities in IEEE 802.16 WiMAX protocols. Lu et al. <cit.> used TLA+ to verify properties of core algorithms of the Pastry protocol. Braithwaite et al. <cit.> used TLA+ for specifying and model checking a core protocol of Tendermint blockchains. Further, TLA+ was used to verify firewalls <cit.>, the ZooKeeper atomic broadcast protocol <cit.>, a design for state channels <cit.>, for checking security properties of smart contracts <cit.>, and for proving properties of Cross-Chain swaps <cit.>. The Lightning Network's protocol was formalized before by Kiayias and Thyfronitis Litos <cit.>. They formalized an ideal functionality and used the UC framework <cit.> to prove that the Lightning Network's protocol securely implements this ideal functionality. Compared to our formalization, the protocol formalization of <cit.> considers more details about the cryptographic aspects. While working on our TLA+ formalization of the Lightning Network's protocol, we found two subtle flaws in the formalization of <cit.> of the Lightning Network's protocol that render the formalized protocol insecure. However, we believe that these flaws can be corrected and that the Lightning Network's protocol fulfills the ideal functionality formalized in <cit.>. The first flaw concerns an incomplete description of how a user reacts to maliciously published outdated transactions. The second flaw is more subtle and concerns how the data in an input is linked to the spending methods of an output that is spent by this input. A detailed description of the flaws can be found in <ref>. While we found the first flaw by comparison of our formalization to the formalization in the paper, we found the second flaw only by model checking when we had a similar flaw in a draft of our formalization. While the specific flaws can be fixed with low effort, it is difficult and tedious to find such flaws in a pen-and-paper proof. Using our approach of model checking instead, such issues can be revealed automatically. § VERIFICATION OF SECURITY PROPERTIES OF THE LIGHTNING NETWORK'S PROTOCOL §.§ Overview §.§.§ Formalization of the Lightning Network's Protocol For the formalization of the specification of the protocol we build upon and extend the work of Grundmann et al. <cit.>. The formalization describes all possible actions how a user of the payment channel initiates transactions or reacts to messages or events. Messages are exchanged by the two users inside a payment channel by writing messages to a message queue per user. Messages can be arbitrarily delayed but are delivered in-order. In its structure, the formalization of the protocol specification follows the specification of the Lightning Network[<https://github.com/lightning/bolts/blob/master/02-peer-protocol.md>]. The formalization abstracts, however, multiple implementation details and parts that are not part of the main functionality such as fees and error messages. 
The TLA+ specification of the protocol consists of three modules: Two modules concern the specification of actions that a user performs for the execution of the payment channel protocol: HTLCUser specifies the actions concerning HTLCs for multi-hop payments, e.g., sending an invoice, creating an HTLC, fulfilling an HTLC. PaymentChannelUser specifies how the payment channel is created, how the payment channel is updated when a new HTLC is added or a fulfilled HTLC is persisted, how the payment channel is closed, how an adversary can cheat, and how the honest user punishes a cheating user. More specifically, these actions include for example actions for creating and sending a signature of a new commitment transaction to the other user, processing messages from the other user, or publishing a commitment transaction on the blockchain to close the channel. The third module is LedgerTime, the clock that increases the current time. Time is measured in the Lightning Network's protocol by the block count of the Bitcoin blockchain. Thus, it is represented as a natural number and increased in integer steps. The specification puts these three modules together by having a single LedgerTime module and by instantiating the HTLCUser module for each modeled user and one instance of PaymentChannelUser per channel for each user. Formally, the TLA+ specification is defined by a set of initial states and a Next action that describes possible steps that can lead from one state to a new state. The Next action for a specification with three users and two channels can be found in <ref>. This formal specification is pictorially represented in <ref>. §.§.§ Security Properties Our goal is to show that this specification implements functional properties and fulfills security properties, e.g., (1) an honest user eventually receives the user's correct balance even if other users act maliciously by publishing outdated states on the blockchain, or (2) the balances after a multi-hop payment are correct, i.e., the payment's amount is deducted from the sender's balance, any intermediary's balance stays the same[An intermediary's balances change in the two payment channels that the intermediary uses for forwarding a payment, but the overall balance of the intermediary in all of the intermediary's payment channels should stay the same.], and the payment's amount is added to the recipient's balance. We formalized the security properties in the form of an idealized payment network functionality whose full specification can be found in <ref>. This ideal payment network functionality models a payment network in which users start with an external balance (stored in variable ExtBalances), deposit assets, make payments (variable Payments), and withdraw their assets. Users can be dishonest (variable Honest). The ideal functionality specifies that dishonest users cannot steal money from other users. Instead, dishonest users might be punished for cheating by losing a part of their balance. A system for which the external variables ExtBalances, Payments, and Honest have values that are allowed by the ideal payment network functionality is secure, i.e., balances are computed correctly and dishonest users can only lose but not gain assets. §.§.§ Challenge: Exploring the State Space Because the order in which messages are sent and processed in the payment channel protocol can vary, there are many different possible executions of the protocol.
The state space explodes if two or more payment channels are modeled because there is a large number of different combinations of the states the payment channels can be in. Therefore, a specification modeling multiple payment channels is too large for model checking. To verify the security properties of a specification for multiple payment channels nevertheless, we use the following approach. §.§.§ Approach: Abstracting Specification of a Payment Channel We specify an idealized multi-hop specification that uses an ideal functionality of a payment channel instead of a formalization of the real protocol. For the idealized multi-hop specification, we replace the two instances of PaymentChannelUser per channel by an instance of the module IdealChannel. A pictorial representation is shown in <ref> and the Next action is shown in <ref>. The module IdealChannel specifies the functionality of a payment channel at a coarser granularity: The actions describe the changes to both parties' state simultaneously and abstract from the exchange of messages on the protocol level as specified in PaymentChannelUser. Abstracting the behavior of one payment channel reduces the state space and, together with an optimization of the LedgerTime module, it allows for model checking the combination of multiple payment channels in multi-hop payments. We use the idealized multi-hop specification to show that the specification of the Lightning Network's protocol fulfills the security properties: We show that the idealized multi-hop specification fulfills the security properties and we extend this result to the protocol specification by showing that the protocol specification is a refinement of the idealized multi-hop specification. The protocol specification refines the idealized multi-hop specification iff for every behavior of the protocol specification there exists a behavior of the idealized multi-hop specification for which the externally visible variables (i.e., ExtBalances, Payments, Honest) have the same values. As the security properties rely on the externally visible variables only, the protocol specification fulfills the security properties if the idealized multi-hop specification does. From a security perspective the refinement means that every attack that is possible in the protocol specification must also be possible in the idealized multi-hop specification. Showing the absence of attacks in the idealized multi-hop specification, thus, shows that the protocol specification is secure. §.§ Proof sketch In the following, we give an overview of our proof and the most important arguments. The proof's structure is graphically shown in <ref>. In the following, we refer to specifications with Roman numerals (I) to (X) as represented in <ref> and (partially) defined in <ref>. In the proof sketch shown in <ref>, the ideal payment network functionality is depicted as (X). Our goal is to show that the specification of the Lightning Network's protocol (I) fulfills the security properties, i.e. we show the validity of the formula (I) ⇒ (X) in which the free variables are ExtBalances, Payments, and Honest. As specification (I) is too complex for model checking, we specify two abstractions of this specification ((VIII) and (IX)) so that abstraction (IX) can be model checked. The first abstraction step is to abstract the formalization of multi-hop payments using the Lightning Network's protocol to a formalization of multi-hop payments using the idealized channel specification ((I) ⇒ (VIII)).
The second abstraction is to abstract time by grouping equivalent behaviors that differ only by the timestamps ((VIII) ⇒ (IX)). Then, we use a model checker to show that multi-hop payments with idealized payment channels (IX) are a refinement of the idealized payment network specification (X). To show that (I) ⇒ (VIII), we decompose the specification of multiple payment channels into the specification of a single payment channel and show that each payment channel implemented by the Lightning Network's protocol is a refinement of the idealized payment channel specification. To include a specification of the environment of a single payment channel, we specify the module MultiHopMock. An instance of the module MultiHopMock for one specific user abstracts the behavior that other payment channels and users can have on this user's variables, i.e., every action that can happen in the payment channel between Alice and Bob is an action of the instance of MultiHopMock for Charlie who has a channel with Bob. Having the module MultiHopMock abstracting other users and payment channels allows us to compose the specifications of single payment channels back to multi-hop payments when using the idealized specification for a payment channel ((VI) ⇒ (VII)). We show that multi-hop payments using the Lightning Network's protocol and the MultiHopMock are a refinement of multi-hop payments using the idealized channel specification and the MultiHopMock ((II) ⇒ (VII)). We show that (II) ⇒ (VII) by first decomposing the specification of multiple payment channels into a single payment channel ((II) ⇒ (IV)). Then, we show by specifying a refinement mapping verified by a model checker that the specification of the Lightning Network's protocol refines the idealized channel specification ((IV) ⇒ (V)). As this step reduces the complexity of the Lightning Network's protocol into a simpler ideal functionality, this step is the most difficult part of the proof. To verify the correctness of the step, we use a model checker. Then, we show how this result extends to multi-hop payments using idealized payment channels ((V) ⇒ (VII)). Having shown that there exists a refinement mapping for (II) ⇒ (VII), we show that (I) ⇒ (VIII) by showing that the subset of specification (II) that equals specification (I) is mapped by the refinement mapping to the subset of specification (VII) that equals specification (VIII). In the following, we elaborate on the intuition behind these proof steps. §.§.§ (II) ⇒ (III) Compared to the protocol specification (I), the specification (II) adds for each modeled user an instance of MultiHopMock, a module that abstracts effects that other users can have on one user. We show that each single payment channel in specification (II) behaves as specified by specification (III). In informal words, this means that if we look at only the variables that are relevant for one single payment channel then specification (II) refines specification (III). Specification (II) consists of an instance of LedgerTime, one instance of HTLCUser per user, one instance of MultiHopMock per modeled user, and two instances of PaymentChannelUser per payment channel. A step of the specification (II) can be a step of any of these modules. We refer to the set of variables of specification (II) that concern the payment channel AB as v_AB. 
Considering v_AB, we show that the instances of PaymentChannelUser of any other payment channel and the instances of HTLCUser of any user not being part of the payment channel AB are a refinement of the instance of MultiHopMock for channel AB. Intuitively, this means for a channel AB that every step that happens in another payment channel does either not affect the variables of AB or is also a step by MultiHopMock which is also part of specification (III). Therefore, each step of specification (II) that changes the variables v_AB, is also a step of specification (III) for the same set of variables. §.§.§ (III) ⇒ (IV) While specification (III) includes an instance of LedgerTime that allows incrementing the current time by single time steps, specification (IV) uses an instance of an optimized LedgerTime module that skips points in time that lead to equivalent future behaviors. The intuition why this is possible is that, in the payment channel protocol, the current time is only used for checking whether a point in time has already passed or not. For each such condition, two points in time that are both on the same side of the comparison lead to the same possible steps. We show that (III) ⇒ (IV) by showing that for each behavior of specification (III) there exists a behavior of specification (IV) where the time might be different but all other variables have the same values. §.§.§ (IV) ⇒ (V) On the one hand, showing that specification (IV) formalizing the Lightning Network's protocol is a refinement of specification (V) formalizing an idealized payment channel is the most difficult refinement of the chain of refinements that we show. On the other hand, the simplifications of the previous refinements from specification (I) to (IV) result in specification (IV) having a state space that is explorable using model checking. We show that (IV) ⇒ (V) by explicitly formalizing a refinement mapping between specification (IV) and specification (V) and validating this refinement mapping by model checking. §.§.§ (V) ⇒ (VI) Specification (V) trivially refines specification (VI) because the only change between the two specifications is that specification (V) uses an optimized LedgerTime instead of the regular LedgerTime and each step of the optimized LedgerTime is also a step of the regular LedgerTime module. §.§.§ (VI) ⇒ (VII) Compared to specification (VI) for a single payment channel, specification (VII) composes multiple payment channels. A behavior of specification (VI) is also a behavior of specification (VII) in which no steps are taken in other payment channels. Thus, (VI) ⇒ (VII). §.§.§ (I) ⇒ (VIII) From above, we conclude that (II) ⇒ (VII). The composition of refinement mappings of the individual steps defines a refinement mapping from (II) to (VII) that we call f in the following. We argue that the protocol implementation without the module MultiHopMock also refines the idealized specification without the module MultiHopMock ((I) ⇒ (VIII)) because restricting the domain of the refinement mapping f to the behaviors allowed by specification (I) results in a refinement mapping from (I) to (VII). This follows from the fact that by construction of the individual refinement mappings that the refinement mapping f is composed of, the refinement mapping f affects only steps of the module PaymentChannelUser which are mapped to stuttering steps or steps of the module IdealChannel. 
The refinement mapping f maps a step of the module MultiHopMock or the module HTLCUser or the module LedgerTime in specification (II) to a step of MultiHopMock (resp. HTLCUser, LedgerTime) in specification (VII). It follows that the refinement mapping f is a also a refinement mapping for (I) ⇒ (VIII). §.§.§ (VIII) ⇒ (IX) To get a specification with a reduced state space, we replace the regular LedgerTime module in specification (VIII) by an optimized LedgerTime module and retrieve specification (IX). Analogously to the refinement (III) ⇒ (IV), this optimization does not drop any behaviors and, thus, (VIII) ⇒ (IX). §.§.§ (IX) ⇒ (X) Through model checking, we show that specification (IX) is a refinement of the idealized payment network functionality (X) and, thus, fulfills the security properties. §.§.§ Conclusion From the refinements above, we conclude that (I) ⇒ (X), i.e. the specification of the Lightning Network's protocol is an implementation of an idealized payment network and fulfills the security properties. § CONCLUSION While the Lightning Network's protocol is complex and many different states can be reached by a model checker, we have presented an approach that makes it possible to verify the security of the protocol in a partially machine-checked way. The approach uses model checking for two proof steps that are difficult to reason about: Showing that a single payment channel refines the ideal channel functionality and showing that the protocol for multi-hop payments is secure. We are currently working on fully formalizing every step of the proof. IEEEtran §.§ Idealized Payment Network Specification <ref> shows the specification of an idealized payment network. It can easily be checked that this specification is secure, i.e., an honest party can withdraw at least their correct balance. The Next action of the specification is a disjunct of five actions: DepositBalance is for one user to deposit balance into the payment network. WithdrawBalance can be used to withdraw the balance of one or two users from the payment network. ProcessPayment processes a payment that has initially been specified in the Payments variable. This action simply removes the payment's amount from the sender's balance and adds the same amount to the recipient's balance. AbortPayment models aborted payments by removing the payment from the Payment variable without any further change to the state. PunishCheating models a cheating dishonest party that is punished by another party. As a punishment the punishing party retrieves a part of the dishonest party's balance. §.§ Next Actions of Specifications Each specification shown in <ref>, is defined by an Init predicate that defines the set of initial states, a Next action that defines possible steps, and a fairness condition. In the following, we show how the Next action of each specification is composed. The Next actions of the modules used in these definitions are shown in <ref>. 
pvspace8.0pt x Next_I xs16.4 LedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 HTLCUserNext ( Charlie ) xs16.4 PaymentChannelUserNext ( AB , Alice ) xs16.4 PaymentChannelUserNext ( AB , Bob ) xs16.4 PaymentChannelUserNext ( BC , Bob ) xs16.4 PaymentChannelUserNext ( BC , Charlie ) pvspace8.0pt x Next_II xs16.4 LedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 HTLCUserNext ( Charlie ) xs16.4 PaymentChannelUserNext ( AB , Alice ) xs16.4 PaymentChannelUserNext ( AB , Bob ) xs16.4 PaymentChannelUserNext ( BC , Bob ) xs16.4 PaymentChannelUserNext ( BC , Charlie ) xs16.4 MultiHopMockNext ( Alice ) xs16.4 MultiHopMockNext ( Bob ) xs16.4 MultiHopMockNext ( Charlie ) pvspace8.0pt x Next_III xs16.4 LedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 PaymentChannelUserNext ( AB , Alice ) xs16.4 PaymentChannelUserNext ( AB , Bob ) xs16.4 MultiHopMock_Next pvspace8.0pt x Next_IV xs16.4 OptimizedLedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 PaymentChannelUserNext ( AB , Alice ) xs16.4 PaymentChannelUserNext ( AB , Bob ) xs16.4 MultiHopMock_Next pvspace8.0pt x Next_V xs16.4 OptimizedLedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 IdealChannelNext ( AB ) xs16.4 MultiHopMock_Next pvspace8.0pt x Next_VI xs16.4 LedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 IdealChannelNext ( AB ) xs16.4 MultiHopMock_Next pvspace8.0pt x Next_VII xs16.4 LedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 HTLCUserNext ( Charlie ) xs16.4 IdealChannelNext ( AB ) xs16.4 IdealChannelNext ( BC ) xs16.4 MultiHopMockNext ( Alice ) xs16.4 MultiHopMockNext ( Bob ) xs16.4 MultiHopMockNext ( Charlie ) pvspace8.0pt x Next_VIII xs16.4 LedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 HTLCUserNext ( Charlie ) xs16.4 IdealChannelNext ( AB ) xs16.4 IdealChannelNext ( BC ) pvspace8.0pt x Next_IX xs16.4 OptimizedLedgerTime_Next xs16.4 HTLCUserNext ( Alice ) xs16.4 HTLCUserNext ( Bob ) xs16.4 HTLCUserNext ( Charlie ) xs16.4 IdealChannelNext ( AB ) xs16.4 IdealChannelNext ( BC ) §.§ Next Actions of Modules In the following, we list Next actions of the modules used in our specifications. Each action is a conjunct of multiple subactions of which the definitions are not printed. 
pvspace8.0pt x LedgerTime_Next xs16.4 AdvanceLedgerTime pvspace8.0pt x OptimizedLedgerTime_Next xs16.4 OptimizedAdvanceLedgerTime pvspace8.0pt x HTLCUser_Next xs16.4 RequestInvoice xs16.4 GenerateAndSendPaymentHash xs16.4 ReceivePaymentHash xs16.4 AddAndSendOutgoingHTLC xs16.4 ReceiveUpdateAddHTLC xs16.4 SendHTLCPreimage xs16.4 ReceiveHTLCPreimage xs16.4 SendHTLCFail xs16.4 ReceiveHTLCFail pvspace8.0pt x IdealChannel_Next xs16.4 OpenPaymentChannel xs16.4 UpdatePaymentChannel xs16.4 CommitHTLCsOnChain xs16.4 FulfillHTLCsOnChain xs16.4 WillPunishLater xs16.4 ClosePaymentChannel pvspace8.0pt x PaymentChannelUser_Next xs16.4 SendOpenChannel xs16.4 SendAcceptChannel xs16.4 CreateFundingTransaction xs16.4 SendSignedFirstCommitTransaction xs16.4 ReplyWithFirstCommitTransaction xs16.4 ReceiveCommitTransaction xs16.4 PublishFundingTransaction xs16.4 NoteThatFundingTransactionPublished xs16.4 SendNewRevocationKey xs16.4 ReceiveNewRevocationKey xs16.4 SendSignedCommitment xs16.4 ReceiveSignedCommitment xs16.4 ReceiveSignedCommitmentDuringClosing xs16.4 RevokeAndAck xs16.4 ReceiveRevocationKey xs16.4 ReceiveRevocationKeyForTimedoutHTLC xs16.4 CloseChannel xs16.4 Cheat xs16.4 Punish xs16.4 NoteThatOtherPartyClosedHonestly xs16.4 NoteThatOtherPartyClosedButUnpunishable xs16.4 NoteThatOtherPartyClosedDishonestly xs16.4 NoteCommittedAndUncommittedAndPersistedHTLCs xs16.4 NotePunishedHTLCs xs16.4 UpdatePunishedHTLCs xs16.4 NoteAbortedHTLCs xs16.4 RedeemHTLCAfterClose xs16.4 NoteThatHTLCFulfilledOnChain xs16.4 NoteThatHTLCTimedOutOnChain xs16.4 WillPunishLater xs16.4 InitiateShutdown xs16.4 ReceiveInitiateShutdown xs16.4 IgnoreMessageDuringClosing xs16.4 NoteThatChannelClosedAndAllHTLCsRedeemed pvspace8.0pt x MultiHopMock_Next xs16.4 AddNewForwardedPayment xs16.4 ReceivePreimageForIncomingHTLC §.§ On the Formalization of <cit.> While working on the formalization of the Lightning Network's protocol in TLA+, we found the following two flaws in the formalization of <cit.>. While these flaws render the formalized protocol insecure, they are easy to fix and it seems that the security proof could work for the corrected protocol. The following references to figures and page numbers refer to the paper's version on ePrint <cit.>. The first flaw concerns the punishment of the publication of an outdated commitment transaction for which the protocol is specified in Fig. 37, lines 21-25 (page 64). A problem arises for example in the following situation: Before the current time, user Alice has sent an outgoing HTLC to user Bob. The HTLC was committed and has been fulfilled. Now, the HTLC's absolute timelock has passed. Now, Alice has an outdated commitment transaction that commits the HTLC and Alice has Bob's signature on the HTLC timeout transaction corresponding to that HTLC. Alice is malicious and publishes this outdated commitment transaction together with the HTLC timeout transaction which is valid because the HTLC's absolute timelock has passed. Bob runs the protocol specified in Fig. 37 and arrives at line 22. In line 22, a revocation transaction is created whose inputs spend all outputs of the outdated commitment transaction. In the situation described, such a revocation transaction cannot be valid because the HTLC output in the outdated commitment transaction is already spent. Instead of an input referencing the outdated commitment transaction's HTLC output, the revocation transaction must have an input that references the output of the HTLC timeout transaction. While the protocol as formalized in Fig. 
37 is incorrect, the security proof on page 90 does not mention the case that a second-stage (timeout or success) HTLC transaction might have been published for an outdated commitment transaction and, thus, the protocol seems to be correct. While the protocol can be corrected by adding a specification of how such cases are handled, it is hard to detect such flaws by inspecting the proof manually. For a scenario that shows the impact of the second flaw, assume that in the payment channel between users Alice and Bob there is currently an unfulfilled HTLC for a payment from Alice to Bob. The HTLC's absolute timelock passes and the HTLC times out. Bob unilaterally closes the payment channel by publishing the latest commitment transaction. The commitment transaction contains an output for the HTLC with the spending method pt_rev,n+1 (pt_htlc, n+1, 𝙲𝚕𝚝𝚟𝙴𝚡𝚙𝚒𝚛𝚢 absolute) (pt_htlc, n+1 ph_htlc, n+1, on preimage of  h) (see Fig. 40, line 8) where pt are public keys for which Alice has the private key, ph are public keys for which Bob has the private key, and 𝙲𝚕𝚝𝚟𝙴𝚡𝚙𝚒𝚛𝚢 is the HTLC's absolute timelock. Now, Alice could spend the output of the commitment transaction corresponding to the HTLC by creating a transaction with an input that uses the disjunct (pt_htlc, n+1, 𝙲𝚕𝚝𝚟𝙴𝚡𝚙𝚒𝚛𝚢 absolute) because the absolute timelock has passed. Bob holds the HTLC success transaction that was signed by Alice with the private key corresponding to pt_htlc, n+1 (Fig. 43, line 13). If Bob has the preimage for the HTLC, Bob can add the preimage to the HTLC success transaction and can spend the HTLC's output in the commitment transaction using the disjunct (pt_htlc, n+1 ph_htlc, n+1, on preimage of  h) of the spending method. However, the HTLC success transaction is also valid without the preimage as it fulfills the conditions of the disjunct (pt_htlc, n+1, 𝙲𝚕𝚝𝚟𝙴𝚡𝚙𝚒𝚛𝚢 absolute) because the HTLC's absolute timelock has passed. Because Bob published his latest commitment transaction, Alice cannot revoke the transaction and this would result in Bob receiving the amount of the HTLC without releasing (or even without having) the preimage. One way to correct this problem is to use the possibility that the transaction model of the paper <cit.> allows an output to specify a list of spending conditions and an input spending this output to reference a specific spending condition. The correction would be to transform the disjunction in Fig. 40, line 8 into a list of spending methods and add the corresponding indices to the inputs in Fig. 43, line 13. Another way is taken by the Lightning Network's specification which uses in the output's spending method for a timeout the operator that verifies that a spending transaction has a certain timelock set (). As Bob's HTLC success transaction has the set to 0, the success transaction cannot fulfill this spending method. We found this flaw by model checking when we had a similar flaw in a draft of our formalization. We fixed the flaw in our formalization by modeling the field for transactions and adding a validity condition modeling the operator .
http://arxiv.org/abs/2307.02510v1
20230705100724
Mixed Leader-Follower Dynamics
[ "Hsin-Lun Li" ]
math.OC
[ "math.OC", "math.DS", "91C20, 91D25, 91D30, 93D50, 94C15" ]
Hsin-Lun Li Mixed Leader-Follower Dynamics Mixed Leader-Follower Dynamics Hsin-Lun Li H Li is with the Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan, e-mail: [email protected]. June 2023 ========================================================================================================================================================= empty The original Leader-Follower (LF) model partitions all agents whose opinion is a number in [-1,1] to a follower group, a leader group with a positive target opinion in [0,1] and a leader group with a negative target opinion in [-1,0]. A leader group agent has a constant degree to its target and mixes it with the average opinion of its group neighbors at each update. A follower has a constant degree to the average opinion of the opinion neighbors of each leader group and mixes it with the average opinion of its group neighbors at each update. In this paper, we consider a variant of the LF model, namely the mixed model, in which the degrees can vary over time, the opinions can be high dimensional, and the number of leader groups can be more than two. We investigate circumstances under which all agents achieve a consensus. In particular, a few leaders can dominate the whole population. Hegselmann-Krause dynamics, Leader-Follower dynamics, consensus, e-commerce, dominance, target. § INTRODUCTION The Hegselmann-Krause (HK) model and the Deffuant model are two of the most popular mathematical models in opinion dynamics. The similarities between the two are that both consist of a finite number of agents and set a constant confidence threshold, therefore belonging to the bounded confidence model. <cit.> and <cit.> are some papers relating to the bounded confidence model. In both models, all agents have their opinion in 𝐑^d and two agents are opinion neighbors if their opinion distance does not exceed the confidence threshold. The original opinion space of both models is the closed interval between 0 and 1, namely [0,1]. The difference between the two is their interaction mechanisms. The former belongs to group interaction, whereas the latter belongs to pair interaction. The HK model is averaging dynamics, divided into two types: the synchronous HK model and the asynchronous HK model. The updated opinion of an agent is the average opinion of its opinion neighbors. All agents update their opinion simultaneously for the synchronous HK model and only one agent uniformly selected at random updates its opinion for the asynchronous model. <cit.> is a paper about the synchronous HK model and the asynchronous HK model. Some variants of the HK model were proposed, such as <cit.> and <cit.>, which belong to either the model with limited interactions or that with self-belief. The authors in <cit.> raised a nontrivial lower bound for the probability of consensus of the HK model with continuous time. The author in <cit.> proposed a variant of the HK model in which all agents can decide their degree of stubbornness and mix their opinion with the average opinion of their opinion neighbors, called the mixed HK model. The degree of stubbornness is a number between 0 and 1. The more stubborn an agent, the closer to 1 its degree of stubbornness, and vice versa. The mixed HK model is the synchronous HK model when all agents are absolutely open-minded, and the asynchronous HK model, when only one agent uniformly selected at random is absolutely open-minded and the rest are absolutely stubborn. 
The author showed circumstances under which a consensus can be achieved or asymptotic stability holds. The techniques include Perron-Frobenius for Laplacians in <cit.>, Courant-Fischer Formula in <cit.> and Cheeger's Inequality in <cit.>. <cit.> is the sequel to <cit.>. The mixed HK model is studied deterministically in <cit.> and nondeterministically in <cit.>. Apart from the HK model, all agents in the Deffuant model interact in pairs and there is a social relationship, such as a friendship, among some agents. Two agents are social neighbors if they have a social relationship. A pair of social neighbors are selected at each time step and interact if and only if their opinion distance does not exceed the confidence threshold. The author in <cit.> proposed the first proof of the main conjecture about the Deffuant model. An alternative proof to that conjecture is in <cit.>. <cit.>, <cit.>, <cit.> and <cit.> are works related to the Deffuant model. The authors in <cit.> raised a nontrivial lower bound, which does not depend on the size of all agents, for the probability of consensus of the Deffuant model. Some properties of the HK model and the Deffuant model are in common. For instance, all agents are opinion-connected to each other thereafter if they are opinion-connected to each other at some time step. The author in <cit.> blended a social relationship in the mixed HK model and argued that the mixed HK model includes group interaction and pair interaction, therefore containing the HK model and the Deffuant model. The Leader-Follower (LF) dynamics originated from the Hegselmann-Krause (HK) dynamics. The authors in <cit.> proposed the LF model that partitions agents whose opinion is in [-1,1] into a follower group, a leader group with a positive target opinion in [0,1] and a leader group with a negative target opinion in [-1,0]. Namely, all agents consist of three types of agents: followers, positive target leaders and negative target leaders. In the original HK model, all agents update their opinion in [0,1] by taking the average opinion of their opinion neighbors. In the LF model, all positive target leaders have a constant degree toward their positive target and mix the average opinion of their positive target neighbors with their positive target, all negative target leaders have a constant degree toward their negative target and mix the average opinion of their negative target neighbors with their negative target, and all followers have their constant degree toward the average opinion of their positive target neighbors and that toward the average opinion of their negative target neighbors and mix the average opinion of their follower neighbors with the average opinion of their positive target neighbors and the average opinion of their negative target neighbors. In a nutshell, the leader groups consider their group neighbors and their target, whereas the follower group follows others, therefore considering follower group neighbors, positive target group neighbors and negative target group neighbors. Interpreting mathematically, define [n]={1,2,…,n}. Say N agents including N_1 followers, N_2 positive target agents and N_3 negative target agents, set as [N_1], [N_1+N_2]-[N_1] and [N]-[N_1+N_2]. 
The mechanism is as follows: [ x_i(t+1)=1-α_i-β_i/|N_i^F(t)|∑_j∈ N_i^F(t)x_j(t); + α_i/|N_i^P(t)|∑_j∈ N_i^P(t)x_j(t); + β_i/|N_i^N(t)|∑_j∈ N_i^N(t)x_j(t), i=1,…,N_1,; x_i(t+1)=(1-w_i)/|N_i^P(t)|∑_j∈ N_i^P(t)x_j(t)+ w_i d,; i=N_1+1,…,N_1+N_2,; x_i(t+1)=1-z_i/|N_i^N(t)|∑_j∈ N_i^N(t)(t)x_j(t)+ z_i g,; i=N_1+N_2+1,…,N, ] where [ x_i(t) = opinion of agent i at time t,; d ∈ [0,1] is the positive target opinion,; g ∈ [-1,0] is the negative target opinion,; ϵ_i = confidence threshold of agent i,; N_i^F(t) = {j∈ [N_1]:x_i(t)-x_j(t)≤ϵ_i},; N_i^P(t) = {j∈ [N_1+N_2]-[N_1]:; x_i(t)-x_j(t)≤ϵ_i},; N_i^N(t) = {j∈ [N]-[N_1+N_2]:; x_i(t)-x_j(t)≤ϵ_i},; α_i = degree to the average opinion of agent i's; positive target neighbors,; β_i = degree to the average opinion of agent i's; negative target neighbors,; w_i = degree to the positive target of agent i,; z_i = degree to the negative target of agent i,; α_i, β_i, w_i, z_i ∈ [0,1]. ] The authors in <cit.> pointed out that it can be an application in e-commerce. In this paper, we consider a variant of the LF model, namely the mixed LF model. The differences are: * opinions can be high dimensional, * the number of leader groups can be more than two, * the degree of a leader group agent to its group target and the degree of a follower group agent to the average opinion of each of its leader group neighbors can vary over time, * only one confidence threshold, ϵ. In the leader group, each agent can choose its degree to the target opinion and mix its opinion with the average opinion of its group neighbors at each update. In the follower group, each agent can choose its degree to the average opinion of its neighbors in each leader group and mix its opinion with the average opinion of its group neighbors at each update. In a nutshell, leader group members only can interact in its group, whereas follower group members can interact out of its group. Considering a finite set of agents partitioned to a follower group, F, and m leader groups, L_1, …, L_m, the mixed model is as follows: [ x_i(t+1)=α_i^k(t)/|N_i^L_k(t)|∑_j∈ N_i^L_k(t)x_j(t)+(1-α_i^k(t))g_k,; i∈ L_k,; x_i(t+1)=(1-∑_k=1^mβ_i^k(t))/|N_i^F(t)|∑_k∈ N_i^F(t)x_k; +∑_k=1^mβ_i^k(t)/|N_i^L_k(t)|∑_k∈ N_i^L_k(t)x_k, i∈ F, ] where [ α_i^k(t) ∈ [0,1] is the degree to the average opinion of; agent i's group neighbors,; g_k ∈ 𝐑^𝐝 is the target opinion of leader group k,; N_i^F(t) = {j∈ F:x_j(t)-x_i(t)≤ϵ},; N_i^L_k(t) = {j∈ L_k:x_j(t)-x_i(t)≤ϵ},; β_i^k(t) = degree to the average opinion of agent i's; neighbors in leader group k,; β_i^k(t) ∈ [0,1] and equals 0 if N_i^L_k(t)=∅ . ] Observe that the system of leader group k becomes a synchronous HK model if α_i^k(t)=1 for all i∈ L_k, and the follower group system becomes a synchronous HK model if β_i^k(t)=0 for all i∈ F and k∈ [m]. The differences between HK dynamics and LF dynamics are: * HK dynamics have only one group and one interaction rule, whereas LF dynamics has two types of groups, the leader group and the follower group, in which each has its interaction rule, * the update of an agent in HK dynamics only depends on its neighbors, whereas the update of an agent in LF dynamics may also depend on a target. § MAIN RESULTS The next theorem shows circumstances under which all agents in a leader group achieve their target. It turns out that the degree to their target, 1-α_i(t), can be very small to achieve their target for all agents in L. Assume that lim sup_t→∞max_i∈ Lα_i(t)<1. Then, lim_t→∞max_i∈ Lx_i(t)-g=0. 
In particular, lim_t→∞x_i(t)-g=0 if lim_t→∞α_i(t)=0 for some i∈ L. Next, we investigate circumstances under which all agents achieve a consensus. It turns out that a consensus can be achieved as long as their opinion lies in B(g,δ) for some δ<ϵ at some time step and the minimum degree of all leader group agents to their goal and the minimum degree of all followers to the average opinion of their leader neighbors have a lower bound after some time step. Assume that {x_i(t)}_i∈ L∪ F⊂ B(g,ϵ) for some t≥ 0 and that sup_s≥ t{max_i∈ F(1-β_i(s)), max_i∈ Lα_i(s)}<1. Then, lim_t→∞max_i∈ L∪ Fx_i(t)-g=0. Considering an LF system consisting of a follower group and m leader group, we have the following corollaries. Given that the opinions of all agents and all targets fall in some open ball centered at some target of radius ϵ and that the maximum tendency of all leaders toward their leader group opinion neighbors and all followers toward their follower group opinion neighbors is less than some constant less than 1 after some time step. Then, the distance between each agent's opinion and its convex combination of all targets approaches 0 over time. Assume that {x_i(t), g_k}_i∈ (⋃_k=1^m L_k)∪ F, k∈ [m]⊂ B(g_j,ϵ) for some j∈[m] and t≥ 0 and that sup_s≥ t{max_i∈ F(1-∑_k=1^mβ_i^k(s)), max_k∈ [m]max_i∈ L_kα_i^k(s)}<1. Then, lim_t→∞max_i∈ (⋃_k=1^m L_k)∪ Fx_i(t)- ∑_k=1^mβ_i^k(t) g_k/∑_j=1^mβ_i^j(t)=0. In particular for all i∈ (⋃_k=1^m L_k)∪ F, lim_t→∞x_i(t)= ∑_k=1^mβ_i^k g_k/∑_j=1^mβ_i^j if β_i^k(t)→β_i^k as t→∞ for all k∈[m]. Given that all leader groups are independent systems in which all followers belong to one of them and that the maximum tendency of all leaders toward their leader group opinion neighbors and all followers toward their follower group opinion neighbors is less than some constant less than 1 after some time step, then all systems achieve their target over time. Assume that min_i,j∈ [m]g_i-g_j>3ϵ, (x_i(t))_i∈ L_k⊂ B(g_k,ϵ) for some t∈𝐍 and for all k∈ [m] and x_j(t)∈ B(g_k,ϵ) for all j∈ F and for some k∈ [m], and that sup_s≥ t{max_i∈ F(1-∑_k=1^mβ_i^k(s)), max_k∈ [m]max_i∈ L_kα_i^k(s)}<1. Then, all systems eventually achieve their target. § THE MODEL We first investigate the behavior of a leader group. Let L be a leader group and g be its target. It turns out that the maximum distance between its opinions and target, max_i∈ Lx_i(t)-g, is nonincreasing over time. We have x_i(t+1)-g≤α_i(t)max_j∈ N_i(t)x_j(t)-g for all i∈ L. In particular, max_i∈ Lx_i(t+1)-g≤max_i∈ Lα_i(t)max_i∈ Lx_i(t)-g. Observe that x_i(t+1)-g =α_i(t)/|N_i(t)|∑_j∈ N_i(t)[x_j(t)-g] ≤α_i(t)max_j∈ N_i(t)x_j(t)-g. Therefore, max_i∈ Lx_i(t+1)-g≤max_i∈ Lα_i(t)max_i∈ Lx_i(t)-g. Since lim sup_t→∞max_i∈ Lα_i(t)<1, there is (t_k)_k≥0⊂𝐍 strictly increasing such that max_i∈ Lα_i(t_k)≤δ<1 for some δ. For all t≥1, t_s< t≤ t_s+1 for some s∈𝐍. Via Lemma <ref>, max_i∈ Lx_i(t)-g ≤max_i∈ Lα_i(t-1)…max_i∈ Lα_i(t_0)max_i∈ Lx_i(t_0)-g ≤δ^s+1max_i∈ Lx_i(t_0)-g. As t→∞, s→∞. Therefore, lim sup_t→∞max_i∈ Lx_i(t)-g≤ 0. Hence, lim_t→∞max_i∈ Lx_i(t)-g=0. Also, due to Lemma <ref> and max_j∈ Lx_j(t)-g bounded over time, lim sup_t→∞x_i(t+1)-g≤ 0 therefore lim_t→∞x_i(t)-g=0. In other words, all leader group agents achieve their target as long as the minimum degree to their target has a lower bound larger than 0 infinitely many times. In particular, a leader group agent achieves the group target if its degree to the target approaches 1 over time. 
Now, considering an LF system consisting of a follower group F and a leader group L, let B(c,r) be an open ball centered at c of radius r. The convex hull generated by v_1,…,v_n, C({v_1,…,v_n}), is the smallest convex set containing v_1,…,v_n, i.e., C({v_1,…,v_n})={∑_i=1^n a_i v_i:(a_i)_i=1^n is stochastic}. Observe that x_i(t+1)∈ C({x_k(t)}_k∈ L∪ F∪{g}) for all i∈ L∪ F; therefore we have the following lemma. For all δ≥ 0, if {x_k(t)}_k∈ L∪ F⊂ B(g,δ), then {x_k(t+1)}_k∈ L∪ F⊂ B(g,δ). Via Theorem <ref>, max_k∈ Lx_k(s)-g<δ for all δ>0, for some p≥ t and for all s≥ p. For all s≥ p, δ>0, i∈ F and j∈ L, via Lemma <ref>, we have x_i(s)-x_j(s)≤x_i(s)-g+g-x_j(s)< ϵ+δ therefore x_i(s)-x_j(s)≤ϵ and N_i^L(s)=L. Let α_t=max_k∈ Lα_k(t), β_t=max_k∈ F(1-β_k(t)), γ=sup_s≥ t{β_s, α_s}, A_t=max_k∈ Fx_k(t)-g and C_t=max_k∈ Lx_k(t)-g. By Lemma <ref> and the triangle inequality, for all i∈ F and t> p, A_t+1≤ β_t A_t+ C_t ≤ β_tβ_t-1…β_p A_p+β_t…β_p+1C_p+…+β_t C_t-1 + C_t ≤ γ^t-p+1A_p+(t-p+1)γ^t-pC_p therefore lim sup_t→∞A_t+1≤ 0. Via Theorem <ref>, we are done. § CONCLUSION All agents of a leader group achieve their consensus as long as there are infinitely many times that the most stubborn are not too stubborn. For a leader group and a follower group, all agents eventually achieve the leader group's target if all agents' opinion is within ϵ distance from the leader group's target and all agents are not too stubborn. In particular, a few leaders can dominate the whole population. § SIMULATIONS For simulations of Theorem <ref>, considering a leader group of size 100, all opinions uniform on [-1,0], confidence threshold equal to 0.05, target equal to 1 and all agents' tendencies to their leader group opinion neighbors having 1/3 of chance being 0.99 and 1 otherwise. Then, all leader group agents achieve their target eventually as Figure <ref> shows. For simulations of Theorem <ref>, considering a leader group of size 2 with opinions 0.99 and -0.99, a follower group of size 100 with opinions uniform on [0.5, 1], confidence threshold equal to 1, the target of the leader group equal to 0 and the tendencies of all leader group agents toward their leader group opinion neighbors and that of all followers toward their follower group opinion neighbors are 0.99 at all times. Then, all agents achieve the target eventually as Figure <ref> shows. § ACKNOWLEDGMENT The author is funded by the National Science and Technology Council. 10 deffuant2000mixing F. Amblard, G. Deffuant, D. Neau, and G. Weisbuch. Mixing beliefs among interacting agents. Advances in Complex Systems, 3(01n04):87–98, 2000. etesami2015game T. Basar and S. R. Etesami. Game-theoretic analysis of the Hegselmann-Krause model for opinion dynamics in finite dimensions. IEEE Transactions on Automatic Control, 60(7):1886–1897, 2015. beineke2004topics L. W. Beineke, P. J. Cameron, R. J. Wilson, et al. Topics in algebraic graph theory, volume 102. Cambridge University Press, 2004. bernardo2021heterogeneous C. Bernardo, R. Iervolino, and F. Vasca. Heterogeneous opinion dynamics with confidence thresholds adaptation. IEEE Transactions on Control of Network Systems, 2021. vasca2021practical C. Bernardo, R. Iervolino, and F. Vasca. Practical consensus in bounded confidence opinion dynamics. Automatica, 129:109683, 2021. bhattacharyya2013convergence A. Bhattacharyya, M. Braverman, B. Chazelle, and H. L. Nguyen. On the convergence of the Hegselmann-Krause system. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 61–66, 2013. 
biyikoglu2007laplacian T. Biyikoglu, J. Leydold, and P. F. Stadler. Laplacian eigenvectors of graphs: Perron-Frobenius and Faber-Krahn type theorems. Springer, 2007. chen2020convergence F. Bullo, G. Chen, W. Mei, and W. Su. Convergence properties of the heterogeneous Deffuant–Weisbuch model. Automatica, 114:108825, 2020. fortunato2005consensus S. Fortunato. On the consensus threshold for the opinion dynamics of Krause–Hegselmann. International Journal of Modern Physics C, 16(02):259–270, 2005. parasnis2018hegselmann M. Franceschetti, R. Parasnis, and B. Touri. Hegselmann-Krause dynamics with limited connectivity. In 2018 IEEE Conference on Decision and Control (CDC), pages 5364–5369. IEEE, 2018. fu2015opinion G. Fu, Z. Li, and W. Zhang. Opinion dynamics of modified Hegselmann–Krause model in a group-based population with heterogeneous bounded confidence. Physica A: Statistical Mechanics and its Applications, 419:558–565, 2015. MR4058367 N. Gantert, M. Heydenreich, and T. Hirscher. Strictly weak consensus in the uniform compass model on ℤ. Bernoulli, 26(2):1269–1293, 2020. MR2915577 O. Häggström. A pairwise averaging procedure with application to consensus formation in the Deffuant model. Acta Appl. Math., 119:185–201, 2012. MR3164772 O. Häggström and T. Hirscher. Further results on consensus formation in the Deffuant model. Electron. J. Probab., 19:no. 19, 26, 2014. hegselmann2002opinion R. Hegselmann, U. Krause, et al. Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of artificial societies and social simulation, 5(3), 2002. MR3265084 T. Hirscher. The Deffuant model on ℤ with higher-dimensional opinion spaces. ALEA Lat. Am. J. Probab. Math. Stat., 11(1):409–444, 2014. MR3694315 T. Hirscher. Overly determined agents prevent consensus in a generalized Deffuant model on ℤ with dispersed opinions. Adv. in Appl. Probab., 49(3):722–744, 2017. horn2012matrix R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge university press, 2012. MR3069370 N. Lanchier. The critical value of the Deffuant model equals one half. ALEA Lat. Am. J. Probab. Math. Stat., 9(2):383–402, 2012. lanchier2020probability N. Lanchier and H.-L. Li. Probability of consensus in the multivariate Deffuant model on finite connected graphs. Electronic Communications in Probability, 25:1–12, 2020. lanchier2022consensus N. Lanchier and H.-L. Li. Consensus in the Hegselmann–Krause model. Journal of Statistical Physics, 187(3):1–13, 2022. mHK H.-L. Li. Mixed Hegselmann-Krause dynamics. Discrete and Continuous Dynamical Systems - B, 27(2):1149–1162, 2022. mHK2 H.-L. Li. Mixed Hegselmann-Krause dynamics II. Discrete and Continuous Dynamical Systems - B, 28(5):2981–2993, 2023. lorenz2005stabilization J. Lorenz. A stabilization theorem for dynamics of continuous opinions. Physica A: Statistical Mechanics and its Applications, 355(1):217–223, 2005. lorenz2007continuous J. Lorenz. Continuous opinion dynamics under bounded confidence: A survey. International Journal of Modern Physics C, 18(12):1819–1838, 2007. proskurnikov2017tutorial A. V. Proskurnikov and R. Tempo. A tutorial on modeling and analysis of dynamic social networks. part i. Annual Reviews in Control, 43:65–79, 2017. zha2020opinion Q. Zha, G. Kou, H. Zhang, H. Liang, X. Chen, C.-C. Li, and Y. Dong. Opinion dynamics in finance and business: a literature review and research opportunities. Financial Innovation, 6(1):1–22, 2020. zhao2018understanding Y. Zhao, G. Kou, Y. Peng, and Y. Chen. 
Understanding influence power of opinion leaders in e-commerce networks: An opinion dynamics theory perspective. Information Sciences, 426:131–147, 2018.
http://arxiv.org/abs/2307.02922v1
20230706112243
Strong coupling and the C=O vibrational bond
[ "William Leslie Barnes" ]
physics.chem-ph
[ "physics.chem-ph" ]
§ STRONG COUPLING AND THE C=O VIBRATIONAL BOND William L. Barnes Department of Physics and Astronomy, University of Exeter, Exeter, EX4 4QL, United Kingdom email: [email protected] August 1, 2023 § ABSTRACT In this technical note we calculate the strength of the expected Rabi splitting for a molecular resonance. By way of an example we focus on the molecular resonance associated with the C=O bond, specifically the stretch resonance at ∼1730 cm^-1. This molecular resonance is common in a wide range of polymeric materials that are convenient for many experiments, because of the ease with which they may be spin cast to form optical micro-cavities, polymers include PVA <cit.> and PMMA <cit.>. Two different approaches to modelling the expected extent of the coupling are examined, and the results compared with data from experiments. The approach adopted here indicates how material parameters may be used to assess the potential of a material to exhibit strong coupling, and also enable other useful parameters to be derived, including the molecular dipole moment and the vacuum cavity field strength. § INTRODUCTION The strength of the interaction of N molecular resonators and a cavity mode, g_N, is based on the electric dipole interaction. We consider molecular resonances that involve electric dipole transitions, at angular frequency ω_0 and that have a transition dipole moment μ. It is the interaction of this dipole moment with the cavity vacuum field, E_vac that we are interested in, no external source of light is involved <cit.>. The interaction energy for our electric dipole is given by μ.E_vac (where we have assumed that the dipole moment and field are aligned). The (RMS) strength of the vacuum field is √(ħω_0/2Vε_0ε_b), where ε_b is the background permittivity of the molecular material, and V is the volume of the cavity mode <cit.>. To make contact with the literature it is convenient to write the interaction energy for N molecules in a cavity as ħΩ_R and, since we often wish to know how far from the original molecular resonance energy ω_0 the two polaritons ω_± are, we write Ω_R=ω_+-ω_-=2g_N. We need one extra piece of information, the interaction energy scales as the square root of the number of dipoles (molecules) involved <cit.>, i.e. it scales with √(N). We thus have, Ω_R = 2 g_N = 2/ħ√(N)μ E_vac = 2/ħ√(N)μ √(ħω_0/2V_m ε_0 ε_b) so that the material parameters we need to calculate the Rabi splitting are the dipole moment and frequency associated with the molecular transition, μ, ω_0, the concentration of the molecular resonators, √(N/V), and the background permittivity of the host in which the molecules are embedded, ε_b; note we have assumed that the molecules fill the volume of the cavity mode. In addition, if we will probably want to see whether the coupling strength dominates over the molecular and cavity (de-phasing) decay rates (γ_M and γ_C respectively), i.e. whether, g_N>γ_C,γ_M, so that we will need to determine γ_M and γ_C. Out of curiosity, we may also want to evaluate the mode volume V, the quality factors of the molecular and optical resonances, Q_M and Q_C respectively, and the vacuum field strength E_vac. Looking at equation (<ref>) we can see that to evaluate the interaction strength g_N we will need to determine: the dipole moment μ; the resonance frequency ω_0; the concentration N/V of molecular resonators, and and the refractive index of the host medium, n_b. Below we look at two ways of accomplishing this. 
§ THE PARAMETERS WE NEED Let us look at the essentials first, i.e. the number density of C=O bonds, the C=O bond stretch resonance frequency, and the associated dipole moment. From easiest to hardest these are: §.§ Resonance frequency, ω_0. The resonance frequency can easily be determined from IR transmission measurements, and is found to be <cit.> ∼ 1734 cm^-1, which is equivalent to a wavelength of 5.8 μm, and an angular frequency of 3.26×10^14 rad s^-1. We will look at the IR transmission spectrum in more detail below. §.§ Number density, √(N/V). Shalabney et al. use the polymer PVA_C, for which the density is <cit.> 1.19 g cm^-3. Thus, 1 cm^3 contains 1.19 g of the polymer. The molecular weight of PVA_C is 86 <cit.>, so that 1 cm^3 contains 1.19/86 moles. Since each repeat unit contains one C=O bond, the number of C=O bonds per cm^3 is thus 8.33×10^21, so that the density of bonds is 8.33×10^27 per m^3. We assume these numbers also apply to the spun films used in the strong coupling experiments. §.§ Dipole moment, μ. §.§.§ dipole moment from Lorentz oscillator model The first approach to evaluate the dipole moment we use here is to fit a Lorentz oscillator (LO) model for the permittivity of the PVA_C to the measured transmission spectrum. The LO model is incorporated into a Fresnel-based calculation of the transmittance, the sample consisting of the spun polymer film on top of a substrate, see fig <ref>. A convenient `know nothing in advance' formulation of the Lorentz oscillator permittivity is, ε(ω) = ε_b+f'ω_0^2/ω_0^2-ω^2-iγ_MDω, where ε(ω) is the frequency dependent permittivity, ε_b is a constant across the frequency range of interest (it represents the contribution to the permittivity of resonances in other spectral regions). The molecular damping rate is γ_MD (γ_MD corresponds to the full width at half maximum of the resonance), note that the de-phasing rate and the damping rate are related through, γ_M=γ_MD/2. The reduced oscillator strength of the transition is f'. For the Fresnel calculation of the transmittance we also need the thickness of the polymer layer, and the refractive index of the substrate. The dipole moment μ is related to the oscillator strength, f, by <cit.>, f = 2 m_e ω_0/3 ħ e^2|μ|^2, so that if we can find f then we can use (<ref>) to find the dipole moment. (The difference between f' and f is discussed below.) The experimental information we have to work with is the IR transmission spectrum. Shalabney et al. provide such a spectrum for a film of PVA_C on a Germanium substrate (for which we take the permittivity to be ε_Ge=16.0, the superstrate is air). In assessing the transmittance data we need to take account of the reflection that occurs at the substrate/air interface, to simulate the experimental data we thus need to multiply our Fresnel-calculated data by 0.62 to take account of the transmittance of this interface [Note that to normalize their data Shalabney et al. use a bare Ge substrate. This gives two air/Ge interfaces rather than one polymer/Ge interface and one Ge/air interface. There may also be some scattering, e.g. if the Ge was not polished on both sides. The result is that their data need to be scaled somewhat, a rare instance of adjusting the data rather than the model. The formula used for the Transmission of the Ge/air interface is T=1-R=1-((n_Ge-1)/(n_Ge+1))^2=0.64]. In the experiment the overall transmittance of the sample was measured. Figure <ref> shows the transcribed experimental data from Shalabney et al. 
(red data points) together with the result of a Fresnel-based calculation for the transmittance, where the parameters of the Lorentz oscillator model for the C=O bond have been varied to provide a reasonable `by eye' fit to the experimental data. The inset shows the spectra in the neighbourhood of the C=O stretch, the main part of the figure covers a wider energy range. In the main figure there is a gentle periodic modulation of transmittance, arising from interference due to the two surfaces of the polymer film, which are separated by d=1.70 μ m, see fig 1a in <cit.>. The parameters used in the model are given in table table <ref> below. Now, although equation (<ref>) is convenient, it is not appropriate to use in finding the dipole moment because f' does not match the physical origin of the Lorentz oscillator model correctly. Instead we should use, ε(ω) = ε_b+fω_p^2/ω_0^2-ω^2-iγ_MDω, where, ω_p = √(N e^2/Vε_0 m_e), with e the electronic charge and m_e the electron mass. Here N/V is the number density of C=O bonds. Note that ω_p is not a plasma frequency, there is no plasma in the PVA_C, rather we should think of it as short hand for the rhs of equation (<ref>). Shalabney et al. use a third version, ε(ω) = ε_b+f_k/k_0^2-k^2-iγ_MDk k, where now the parameters are given in terms of wavenumber, (cm^-1). We will now find f and γ and will also find f_k and γ_MDk, so that we may make a comparison between our results and those of Shalabney et al.. Comparing equations (<ref>) and (<ref>) we have, f'ω_0^2 = f ω_p^2. From the `fit' shown in figure <ref> we note that we used ω_0=3.26× 10^14 rad s^-1 and f'=0.018, and from this we can calculate ω_p via (<ref>) using the bond density discussed earlier, i.e., N/V=8.0 × 10^27 m^-3. Doing so we find ω_p=5.1× 10^15, so that f=0.7 × 10^-4. The background permittivity was taken to be ε_b=1.99 <cit.>. Next, noting that to convert rad s^-1 to cm^-1 we divide by 1.88 × 10^11, we find, f_k = 54 × 10^3, k_o=1734 cm^-1, and γ_MDk=16 cm^-1. In table <ref> we bring all of these data together, including the data from Shalabney et al., i.e. their results from fitting their own data. To a reasonable approximation the results from the modelling described above agree with those of Shalabney et al. Now let us use equation (<ref>) to find the dipole moment, we have from equation (<ref>), |μ|^2 = 3 ħ e^2/2 m_e ω_0f. Introducing the numerical values we find |μ|=0.97× 10^-30 Cm = 0.29 D [where 1 Debeye (D) = 3.3 × 10^-30 Cm]. This is in the range of vaules given by Grechko and Zanni for various materials <cit.>, note that Shalabney et al. state a value for the dipole moment of 1D. §.§.§ Dipole moment from molar absorption model Chemists frequently make use of extinction measurements, just as we have done above. However, rather than trying to model the extinction (or equivalently the transmittance), chemists usually adopt a different approach based on the molar absorption coefficient, ϵ. The optical measurement is usually carried out on a solution of the molecules of interest, of concentration C_m, held in a sample chamber (cuvette) that provides a path length l. The molar absorption coefficient[Note that this is often called the molar extinction coefficient, but molar absorption coefficient is the correct term, see <cit.>.] is given by <cit.>, ϵ = log_10(I_0/I_t)1/l C_m. I_0 and I_t are the incident and transmitted intensities, and we can write for the transmittance T, I_0/I_t=1/T. As ever in calculating numerical quantities, using the correct units is vital. 
For a physicist SI units seem obvious, but they are not the ones that chemists use here. Instead, the units for the path length, l, are cm, whilst the units for the molecular concentration, C_m, are moles per dcm^-3, i.e. moles per litre. Turro(see <cit.> equation 5.10) and Valeur and Berberan-Santos (see <cit.> equation 2) give the oscillator strength as[Note that <cit.> uses wavenumber (cm^-1), whilst <cit.> uses Hz, we use the wavenumber (cm^-1) version here since this is convenient in the infrared], f = 4.32 × 10^-9∫ϵ(ν̅) dν̅, where ν̅ is wavenumber in cm^-1 and, using <cit.> equation 5.40, this can often be approximated as, f ≈ 4.32 × 10^-9ϵ_max δν̅, where δν̅ is the width (FWHM) of the extinction feature in wavenumbers (cm^-1) [For an alternative derivation based on the Einstein coefficients, see <cit.>.]. We note that the oscillator strength found using equations (<ref>) and (<ref>) is less than the oscillator strength employed in the Lorentz oscillator model, by a factor of n_b. This is associated with the change in energy density of the light in the molecular material (c.f. air), see equation 9.29 in <cit.>. As a result the oscillator strength derived from extinction-type measurements should instead be written as, f/n_b = 4.32 × 10^-9∫ϵ(ν̅) dν̅≈ 4.32 × 10^-9ϵ_max δν̅. For the C=O data of Shalabney et al. (see figure <ref>), ϵ_max corresponds to T=0.045/0.45=1/10, and the path length is l=1.7 × 10^-4 cm. For the molecular concentration, C_m, 1 cm^3 contains 1.19/86 moles, so that there are 13.5 mol dm^-3, i.e. C_m=13.5 mol dm^-3. Bringing these factors together we can use (<ref>) to evaluate ϵ_max and find ϵ_max=436. Finally we can use δν̅ ≈ 30 cm^-1 (estimated from figure <ref>) to calculate f via equation (<ref>), we find f=7.3×10^-5. This compares with the 7.0×10^-5 we obtained earlier, see table <ref>, surprisingly similar given the rather crude approximations we have made. The associated dipole moment is μ = 0.30 D. § THE `DERIVED' PARAMETERS §.§ the coupling strength We can now look at the coupling strength and compare it to the decay rates to see if the strong coupling regime applies. From equation (<ref>) we can calculate the value of g_N for the C=O bond in PVA_C, doing so we find g_N=81 cm^-1 using parameters from the LO model (section <ref>), and g_N=82 cm^-1 using parameters directly from the extinction data (section <ref>). Note that in using equation (<ref>) we need to divide the right-hand side by √(3), see footnote [We divide the right-hand side by √(3) since here we are assuming that the dipole moments associated with all of the C=O bonds are randomly alighted with respect to the cavity field, i.e. that the C=O bonds are randomly oriented in the spun polymer film. This difference between random and aligned is the origin of the factor of 3 in the denominator of (<ref>).]. §.§ the strong coupling condition Recall that for strong coupling we require the ensemble coupling strength, g_N, to be greater than the molecular and cavity de-phasing rates, i.e. we require g_N>γ_M,γ_C. From table <ref> we see that, in cm^-1, the molecular de-phasing rate is γ_Mk=γ_MDk/2 = 8 cm^-1. For the cavity resonance Shalabney et al. give the damping rate via the FWHM of their cavity resonance, as measured in transmission, to be 140 cm^-1 (17 meV), so that γ_Ck=γ_CDk/2 = 70 cm^-1. Thus the coupling strength, g_N ∼80 cm^-1, exceeds the damping rates, confirming that the strong coupling regime has been reached. 
It is interesting to note that the measured Rabi splitting in the experiment of Shalbney et al. is ∼170 cm^-1, implying a value for g_N of ∼85 cm^-1. § MODE WIDTHS §.§ A cautionary tale An element of confusion was encountered in carrying out some of the analysis described here that may have implications for looking at a range of published strong coupling data. The problem has to do with the width of the transmittance minimum, it is not the same as the width that goes into the Lorentz oscillator model. This is illustrated in Figure <ref> where a closer zoom-in of the transmittance data shown in figure <ref> is given. In addition, the imaginary part of the PVA permittivity, and the imaginary part of the PVA index have been added. The imaginary part of the PVA permittivity has a FWHM of 16 cm^-1, whilst the transmittance dip has a FWHM, estimated from figure <ref>, of nearly 30 cm^-1. Taking the measured transmittance width as the FWHM to be used in the Lorentz oscillator model is a mistake, the measured transmittance width is instead the width that is needed when making use of the extinction data to evaluate the oscillator strength. Reader beware! §.§ Mode widths and Q factors For completeness, let us calculate the relevant Q factors. For the molecular resonance this is Q_M=ω_0/γ_MD = 1734/16 cm^-1 = 108. For the cavity resonance it is Q_M=ω_0/γ_CD = 1734/140 cm^-1 = 12. §.§ Cavity volume, vacuum field strength, and number of molecules coupled It may also be useful to look at the vacuum field strength. A reasonable approximation can be found by estimating the mode volume V, and using the well-known experssion for the vacuum field strength <cit.>, E_RMS=√(ħω_0/2 ε_0 ε_b V). For the mode volume, to a reasonable approximation this is dπ r_eff^2 where d is the cavity thickness, and r_eff is the radius of the mode, this in turn is determined by the width of the mode in terms of in-plane wavevector  <cit.> as, r_eff=λ/4 n_bπ√(R)/1-R, where R is the reflectance (intensity reflection coefficient) of the cavity mirrors. A Fresnel multilayer calculation can be used to work out R for the upper and lower mirrors. The mirrors are 10 nm thick gold (The permittivity of gold at this wavelength was estimated to be ε_Au = -1000 + 200i, based on data in  <cit.>). For the upper mirror the three media are PVA/Au/air, for the lower mirror they are PVA/Au/Ge. Values of R_upper = 0.85 and R_lower = 0.8 were found. Taking an average value of ∼ 0.8 we find r_eff = 7.3 μm so that the mode volume is V=dπ r_eff^2 = 300 μm^3. We can now calculate the vacuum field strength with the help of equation (<ref>), we find the field strength to be E_RMS = 1.7 × 10^3 Vm^-1. (Note that Shalabney et al. took their mode volume to be V=(λ/n_b)^3 = 70 μm^3). Finally we can estimate the number of C=O bonds involved from the density (8.3 × 10^27 m^-3) and the mode volume (300 × 10^-18 m^3) as N = 2 × 10^12. It is now clear that the single molecule coupling strength, g_N/√(N) = 1 × 10^-4 cm^-1 is much much smaller than the de-phasing rates, strong coupling here really is a collective effect. § CONCLUSION We have shown how two relatively simple models can be used to understand the extent of the Rabi-splitting observed in vibrational strong coupling experiments. The analysis has been presented in a somewhat tutorial style with the aim of giving those entering the field an easy entry point in trying to link the extent of the Rabi-splitting with bulk material parameters. 
In addition, the analysis presented here allows a number of related parameters to be evaluated, notably the dipole moment, the oscillator strength, the vacuum field strength and the cavity mode volume of planar Fabry-Perot-type cavities. § ACKNOWLEDGEMENTS The author is grateful to Marie Rider and Kishan Menghrajani for valuable discussions and to the European Research Council for funding Project Photmat (ERC-2016-AdG- 742222: www.photmat.eu). Note that this study did not generate any new data. iopart-num
http://arxiv.org/abs/2307.01670v1
20230704120456
Electromagnetic gyrokinetic instabilities in the Spherical Tokamak for Energy Production (STEP) part I: linear physics and sensitivity
[ "Daniel Kennedy", "Maurizio Giacomin", "Francis J Casson", "David Dickinson", "William A Hornsby", "Bhavin S Patel", "Colin M Roach" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Electromagnetic gyrokinetic instabilities in STEP part I]Electromagnetic gyrokinetic instabilities in the Spherical Tokamak for Energy Production (STEP) part I: linear physics and sensitivity ^1Culham Centre for Fusion Energy, Abingdon OX14 3DB, United Kingdom ^2York Plasma Institute, University of York, York, YO10 5DD, United Kingdom [email protected] We present herein the results of a linear gyrokinetic analysis of electromagnetic microinstabilites in the conceptual high-β, reactor-scale, tight-aspect-ratio tokamak STEP (Spherical Tokamak for Energy Production, <https://step.ukaea.uk>). We examine a range of flux surfaces between the deep core and the pedestal top for the two candidate flat-top operating points of the prototype device (EC and EBW operating points). Local linear gyrokinetic analysis is performed to determine the type of microinstabilities that arise under these reactor-relevant conditions. We find that the equilibria are dominated by a hybrid version of the Kinetic Ballooning Mode (KBM) instability at ion binormal and radial scales, with collisional Microtearing Modes (MTMs) sub-dominantly unstable at very similar binormal scales but different radial scales. We study the sensitivity of these instabilities to physics parameters, and discuss potential mechanisms for mitigating them. The results of this investigation are compared to a small set of similar conceptual reactor designs in the literature. A detailed benchmark of the linear results is performed using three gyrokinetic codes; alongside extensive resolution testing and sensitivity to numerical parameters providing confidence in the results of our calculations, and paving the way for detailed nonlinear studies in a companion article. § INTRODUCTION Magnetically confined fusion is promising as a future power source. However, the viability of fusion power plants is strongly influenced by how well the thermal energy can be confined in the plasma. Often, the dominant process governing confinement is microinstability-driven plasma turbulence. The beneficial impacts of equilibrium geometry on the microstability properties of spherical tokamaks (STs) <cit.> were uncovered in early studies motivated by START, MAST and NSTX <cit.>; favorable magnetic drifts, allied with higher radial pressure gradients in STs, were found capable of suppressing some of the drift-wave instabilities that drive anomalous transport in other devices <cit.>. Furthermore, in experiments with tangential NBI, the compact nature of the ST leads to high toroidal flows that can act to suppress turbulence, especially at ion Larmor scales (see recent review of transport and confinement in STs <cit.> and references therein), though it is anticipated that an ST power plant will have minimal momentum input and only modest externally driven flow. On the other hand, the higher trapping fraction in STs contributes to an increased drive for trapped electron modes (TEMs) at high density gradients, though this drive is mitigated if the magnetic drifts are favourable <cit.>. In addition, the high β (the ratio of thermal pressure to magnetic pressure) accessible in STs (e.g., <cit.>) can drive electromagnetic modes unstable, which can significantly increase the core turbulent transport. It is precisely these aforementioned electromagnetic instabilities which we expect to dominate transport in high-β, reactor-scale, tight-aspect-ratio tokamaks such as STEP (Spherical Tokamak for Energy Production) <cit.>. 
To be economically competitive, ST power plant designs such as STEP require a high β, which further necessitates a high β^' (the radial gradient of pressure) in a compact device. As a result, we also require sufficiently low turbulent transport in order to sustain these steep gradients and thus to maximise the self-driven bootstrap current and reduce the need for external current drive in a steady state device. In plasmas such as STEP where β and β^' are sufficiently high, the curvature of the confining magnetic field and the plasma kinetic gradients can excite electromagnetic instabilities such as kinetic ballooning modes (KBMs) and microtearing modes (MTMs). The KBM is driven by electrons and ions at binormal-scales approaching the ion Larmor radius (k_yρ_i≲ 1), propagates in the ion diamagnetic direction, and is closely related to the ideal ballooning mode of magnetohydrodynamics (MHD) <cit.>. MTMs excite radially localised current layers on rational surfaces, are primarily driven unstable by the electron temperature gradient, and propagate in the electron diamagnetic direction. They generate magnetic islands on rational surfaces that tear the confining equilibrium flux surfaces and enhance electron heat transport through magnetic field line stochasticisation <cit.>. In devices where β exceeds a certain threshold value, electromagnetic instabilities can become the fastest growing instabilities in the system and the dominant sources of transport in the plasma core <cit.>. These two instabilities, and the nonlinear interactions between them, will likely play a crucial role in setting the transport levels in the core of devices such as STEP, and dictate the confinement times attainable in next-generation STs such as STEP. Fully understanding the transport impacts of these modes is one of the major physics questions which must be answered to build confidence in the feasibility of designs of future ST power plants. In this work, the first of two related papers, our contributions are: (a) to report on the main results of the gyrokinetic linear analysis of two candidate STEP equilibria, STEP-EC-HD-v5[SimDB UUID: 2bb77572-d832-11ec-b2e3-679f5f37cafe, Alias: smars/jetto/step/88888/apr2922/seq-1] (hereinafter STEP-EC-HD) and STEP-EB-HD-v4[SimDB UUID: 8ea23452-dc00-11ec-9736-332ed6514f8d, Alias: twilson/jetto/step/88888/may2422/seq-1] (hereinafter STEP-EB-HD), at various surfaces between the core and the pedestal top; (b) to identify the dominant and sub-dominant instabilities and elucidate the nature of these modes; (c) to explore the resolution requirements for nonlinear simulations, thus paving the way for the companion work <cit.> (hereinafter referred to as paper (II)), in which we will present the first local nonlinear turbulence simulations for a STEP conceptual design. We begin in sec:equilibria by introducing the STEP equilibria[We remark that the STEP plasma design has not been finalised and these equilibria are thus subject to change.] and the associated plasma parameters, providing some motivation of the design choices and a brief discussion of how the equilibria compare to similar ST design points <cit.>. In sec:overview of simulation results, we present the main results of the gyrokinetic linear analysis of the STEP-EC-HD and STEP-EB-HD equilibria at various surfaces between the core and the pedestal top. 
This analysis reveals the importance of two particular electromagnetic instabilities, a hybrid KBM and a collisional MTM; in sec:hybrid TEM/KBM instability, sec:stabilising the hybrid KBM, and sec:subdominant MTM instability, we explore the salient features of these modes. In sec:code_comparison, we present the results of a three-code comparison. Finally, we present our conclusions and outlook in sec:conclusions. § THE STEP-EC-HD AND STEP-EB-HD EQUILIBRIA STEP is a UK programme that aims to demonstrate the ability to generate net electricity from fusion. STEP is planned to be a compact prototype power plant (based on the ST concept) designed to deliver net electric power P > 100 MW to the national grid <cit.>. The first phase of this ambitious programme is to develop a conceptual design of a STEP Prototype Plant (SPP) and STEP Plasma Reference (SPR) equilibria for preferred flat-top operating points. The ST concept maximises fusion power P_fus∝ (κβ_NB_t)^4/A <cit.> and bootstrap current fraction f_BS = I_BS/I_p in a compact device at relatively low toroidal field by allowing operation at high normalised pressure β_N≃ 4 - 5 and high elongation κ > 2.8. However, alongside these advantages, the ST concept also poses unique challenges, not only in terms of plasma microstability (the focus of this work) but also in terms of the engineering constraints; the compactness restricts significantly the available space for a solenoid, so the required plasma current of I_p≃ 20 MA has to be driven, ramped up and ramped down non-inductively. STEP plasma concepts have been designed <cit.> using the integrated modelling suite JINTRAC <cit.>, to model transport and sources self-consistently in the core plasma with prescribed boundary conditions: simplified models are used for the pedestal boundary conditions, pellet fuelling, heating and current drive, and core transport uses an empirical Bohm-gyro-Bohm (BgB) model which has been tuned both to give dominant electron heat transport as observed experimentally in MAST, and also to give a desired β_N (see Table <ref>). In the present design of the operating points, plasma confinement is assumed. The design points use the minimum H factor for which a non-inductive operating point can be achieved given a set of other constraints. The primary drivers that constrain the confinement needed for a viable operating point include a specified fusion gain Q > 11 (a proxy for net electricity generation), a specified fusion power P_fus > 1.5 GW, EC and EBW current drive efficiency validated against full wave modelling, P_aux < 160 MW, a current profile consistent with MHD, vertical stability and divertor shaping constraints (see <cit.> for further details). Importantly, the STEP parameter regime is outside the range of validity of the most advanced reduced core transport models available, typically developed for present-day conventional tokamaks, which often do not capture the electromagnetic (EM) transport expected to prevail in STEP, as such, it is important to test the assumptions of BgB transport using linear and nonlinear GK simulations; a key thrust of this current work. Here, we focus on two steady-state, non-inductive flat-top operating points, STEP-EC-HD and STEP-EB-HD, both of which are designed to deliver a fusion power P_fus∼ 1.8 GW. These two designs both use RF heating instead of neutral beams to generate the current drive, in order to maximise the wall area available for Tritium breeding and minimise the recirculating power fraction <cit.>. 
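Before detailing the two operating points, the fusion-power scaling quoted above can be made concrete with a short numerical sketch; all parameter values below are illustrative assumptions rather than STEP design values, and the output is only meaningful as a ratio.

# Illustrative sketch of the scaling P_fus ~ (kappa * beta_N * B_t)^4 / A quoted above.
# All parameter values are made-up examples, not STEP design data.
def fusion_performance(kappa, beta_n, b_t, aspect_ratio):
    """Relative fusion performance in arbitrary units."""
    return (kappa * beta_n * b_t) ** 4 / aspect_ratio

conventional = fusion_performance(kappa=1.8, beta_n=2.0, b_t=5.3, aspect_ratio=3.1)
spherical = fusion_performance(kappa=2.8, beta_n=4.5, b_t=3.2, aspect_ratio=1.8)
print(f"ST-like / conventional performance ratio: {spherical / conventional:.1f}")

Even at lower toroidal field, the strong dependence on elongation and normalised pressure in this scaling illustrates why the ST concept can be competitive in a compact device.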
There are modest differences between these equilibria because they use different RF current drive schemes: * STEP-EC-HD utilises only Electron Cyclotron Current Drive (ECCD) heating. * STEP-EB-HD utilises a mixture of ECCD and Electron Bernstein Wave (EBW) heating. Key global parameters of the preferred flat top operating points are shown in Table <ref> (see highlighted columns), and a contour plot of the magnetic flux surfaces in these two design points is shown in Figure <ref>, alongside the corresponding electron density and electron temperature radial profiles as functions of the normalised poloidal flux Ψ_n. In this paper we will perform linear microstability analysis on the surfaces Ψ_n=0.36,0.49,0.58,0.71 in STEP-EC-HD, and Ψ_n = 0.35 in STEP-EB-HD. Our primary focus will be on the flux surfaces where q = 3.5 (Ψ_n=0.49 of STEP-EC-HD and Ψ_n=0.35 of STEP-EB-HD) and where q = 3 (Ψ_n = 0.36 of STEP-EC-HD); these are the only surfaces which will be considered in the three-code benchmark reported in sec:code_comparison. Table <ref> also provides key global equilibrium parameters for two other recently developed conceptual burning ST plasma equilibria: the TDoTP high q_0 case <cit.> and an earlier concept BurST <cit.>. For each flux surface considered, a Miller parameterisation <cit.> was used to model the local plasma equilibrium. Miller parameters were fitted to the surface using pyrokinetics <cit.>, a python library aiming to standardise gyrokinetic analysis between different GK codes and conventions. The same library was also used throughout to facilitate the conversion of input files between the different GK codes used in this work (see the three code comparison reported in sec:code_comparison). Table <ref> reports the local value of the normalised poloidal magnetic flux Ψ_n, magnetic shear, ŝ=(ρ/q)dq/dρ, radial position, ρ=r/a, elongation and its radial derivative, κ and κ', triangularity and its radial derivative, δ and δ' (the symbol ' denotes the derivative with respect to ρ), the radial derivative of the Shafranov shift, Δ', the electron β, β_e = 2μ_0 n_e T_e/B_T^2, and the electron and deuterium density and temperature gradients at different flux surfaces corresponding to low order rational values of the safety factor q. For each surface, we report the value of the binormal wavenumber k_yρ_s corresponding to the toroidal mode number n=1, with ρ_s = c_sD/Ω_D where c_sD = √(T_e/m_D), Ω_D = eB/m_D and m_D the deuterium mass. Nominally, five species (electron, deuterium, tritium, thermalised helium ash, and a heavy impurity species) are included in the integrated modelling of the STEP-EC-HD and STEP-EB-HD equilibria considered in this paper. The simulations performed in this paper are carried out with three kinetic species (electron, deuterium, and tritium) unless explicitly stated otherwise; a future work will explore the influence of fast α particles, which are completely neglected in this analysis. § OVERVIEW OF SIMULATION RESULTS We begin by finding the dominant linearly unstable modes at an initial ballooning angle of θ_0 = 0, i.e., those modes centred on the outboard midplane. For now, we focus our attention on STEP-EC-HD, with the results for STEP-EB-HD reported in sec:code_comparison. The linear simulations presented here are carried out with the gyrokinetic code GS2 <cit.> (commit ). Later (in sec:code_comparison) we will verify their fidelity by comparing the main results obtained with GS2 against CGYRO <cit.> (commit ) and GENE <cit.> (commit ) in a detailed three-code benchmark.
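For orientation, the normalisations introduced above (c_sD, Ω_D, ρ_s and β_e) can be evaluated directly; the sketch below uses illustrative values of T_e, n_e and B rather than the tabulated STEP parameters.

# Minimal sketch of the normalisations defined above; T_e, n_e and B are
# illustrative assumptions, not values taken from the STEP equilibria.
import math

e = 1.602176634e-19       # elementary charge [C]
m_D = 3.3436e-27          # deuterium mass [kg]
mu0 = 4.0e-7 * math.pi    # vacuum permeability [H/m]

T_e = 10e3 * e            # assumed electron temperature: 10 keV, in joules
n_e = 1.0e20              # assumed electron density [m^-3]
B = 3.0                   # assumed magnetic field [T]

c_sD = math.sqrt(T_e / m_D)            # deuterium sound speed, c_sD = sqrt(T_e/m_D)
Omega_D = e * B / m_D                  # deuterium cyclotron frequency, Omega_D = eB/m_D
rho_s = c_sD / Omega_D                 # sound Larmor radius entering k_y rho_s
beta_e = 2.0 * mu0 * n_e * T_e / B**2  # electron beta, beta_e = 2 mu0 n_e T_e / B^2

print(f"c_sD = {c_sD:.3e} m/s, rho_s = {rho_s:.3e} m, beta_e = {beta_e:.3f}")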
Table <ref> indicates the grid parameters used in each code (see highlighted columns for GS2) for calculations that include (FP) or neglect (NBP) the compressional magnetic perturbation δ B_∥. We find that neglecting δ B_∥ is sufficient to suppress the dominant instability in our simulations (see sec:subdominant MTM instability). Otherwise, the physics included in NBP simulations is the same as that in FP simulations, evolving three kinetic species (electrons, deuterium and tritium). The linearized Fokker-Planck collision model of <cit.> is used to model collisions in the system. In this section, we will report primarily on simulations of the dominant instability (using parameters in the FP column), and a thorough discussion of simulations of the subdominant instability (using parameters in the NBP column) will be deferred to sec:subdominant MTM instability. We begin by performing linear initial value calculations to find the dominant unstable modes (i.e., the fastest growing unstable mode) across a range of different binormal wavenumbers. In Figure <ref>, we plot the growth rate, γ, and mode frequency, ω (both normalised to the ion sound frequency), as functions of the normalised perpendicular binormal wavenumber, k_y ρ_s, at various radial locations corresponding to low q rational surfaces: Ψ_n = 0.36 (q=3.0); Ψ_n = 0.49 (q=3.5); Ψ_n = 0.58 (q=4.0); and Ψ_n = 0.71 (q=5.0). §.§ Electron Larmor radius scale modes We begin by noting that there is no purely electron scale instability in the system at any of the core flux surfaces considered (see Fig. <ref>), a result which is due to the large β compared to conventional tokamaks. We also remark here that we see a distinct absence of the collisionless MTM, which tends to dominate the instability spectrum at intermediate scales k_yρ_s = 𝒪(1) in similar ST equilibria <cit.>; the absence of the collisionless MTM is due to the larger value of the density gradient owing to pellet fuelling (which strongly stabilises the collisionless MTM). §.§ Ion Larmor radius scale modes Approaching the binormal deuterium Larmor radius scale k_yρ_s≲ 1, the stability landscape is somewhat more complicated. For clarity, we group the flux surfaces by the structure of their spectra: §.§.§ q = 3.5 (Ψ_n = 0.49) From Figure <ref>, we note that the maximum growth rate occurs approximately at k_yρ_s ≃ 0.4 for the surface at Ψ_n = 0.49. Interestingly, the growth rate as a function of k_yρ_s has two local maxima, similar to that seen in other ST designs <cit.>. The mode frequency is positive (i.e., the most unstable mode is propagating in the ion diamagnetic direction) for all unstable k_y ρ_s modes (i.e., those modes where γ>0). The mode is stable as we approach the sub-ion Larmor radius scale (k_yρ_s > 0.65, n > 130) but is unstable down to very long wavelengths (k_yρ_s = 0.023, n = 5). §.§.§ q = 4.0 (Ψ_n = 0.58) and q = 5.0 (Ψ_n = 0.71) Similarly to Ψ_n = 0.49, the maximum growth rate occurs on both surfaces approximately at k_yρ_s ≃ 0.4. For Ψ_n = 0.71, the growth rate spectrum once again has two local maxima. For Ψ_n = 0.58, there is a single local maximum, but we observe a similar plateau structure in the growth rate spectrum. One key difference with respect to the q = 3.5 (Ψ_n = 0.49) surface is that the longest wavelength unstable modes have weakly negative mode frequency (i.e., the mode is propagating in the electron diamagnetic direction).
We note, however, that the growth rate and frequency vary smoothly as k_yρ_s decreases (cf. the abrupt change of sign in the real frequency between the unstable modes k_yρ_s < 0.65 and the stable modes k_yρ_s > 0.65 on the Ψ_n = 0.49 surface), suggesting that this is perhaps not a discrete mode transition but instead is a change in the nature of the dominant instability (see sec:hybrid TEM/KBM instability for further discussion). §.§.§ q = 3.0 (Ψ_n = 0.36) The maximum growth rate moves to slightly longer wavelengths k_yρ_s ≃ 0.2 at Ψ_n = 0.36, though it once again possesses two local maxima. Again, we observe a slightly different dependence of the mode frequency on k_y ρ_s at Ψ_n ≃ 0.36 compared to the q = 3.5 (Ψ_n = 0.49) surface: the frequency increases at low k_y, reaching a maximum around k_yρ_s ≃ 0.3; the frequency then decreases and changes sign at k_yρ_s ≃ 0.4. Once again, we note that this change in frequency occurs smoothly. The remainder of this manuscript is largely devoted to studying the linear instabilities identified in the STEP-EC-HD equilibrium, focusing in particular on the q = 3.5 flux surface (Ψ_n = 0.49), unless otherwise explicitly indicated. § HYBRID KBM INSTABILITY Based on previous results (see e.g., <cit.> and references therein), we might expect in these high β plasmas that some of the instabilities in Figure <ref> propagating in the ion diamagnetic direction are electromagnetic KBMs, especially where the local equilibrium profiles do not access second stability <cit.>. This section is dedicated to studying the physics of the dominant instability identified in Figure <ref>. §.§ Is the mode electromagnetic or electrostatic? A sensible first step towards classifying and understanding this instability is to examine whether the mode is predominantly electrostatic or electromagnetic; this can be done by examining the eigenfunctions of the dominant instability identified in sec:overview of simulation results. In Figures <ref>-<ref>, we plot the δϕ and δ A_∥ eigenmode structures (both normalised to the maximum value of δϕ) as functions of ballooning angle θ, at k_yρ_s≃0.2 (Figure <ref>) and at k_yρ_s≃0.4 (Figure <ref>) for the flux surfaces with Ψ_n = 0.36 and Ψ_n = 0.49 respectively. We note that the amplitudes of δϕ and δ A_∥ are comparable, thus suggesting that * the mode is predominantly electromagnetic. Electrostatic instabilities are typically characterised by |δ A_∥| ≪ |δϕ|. At both radial locations, the mode is strongly peaked around θ = 0, with even parity in ϕ and odd parity in A_∥. Conventionally, even parity ϕ modes are called ‘twisting parity' and odd parity ϕ modes are called ‘tearing parity'. Therefore, * the mode has twisting parity. Based on properties <ref> and <ref>, and in agreement with previous results in <cit.> for a similar parameter regime, the dominant instability may be associated with a KBM. We can investigate this further by examining whether the mode is indeed active where the equilibrium profiles do not access second stability.
In theory, one could analyse this system of equations to determine which design choices (e.g., shaping) are beneficial for KBM stability. However, the complexity of these equations makes it difficult to assess whether kinetic effects have a net stabilising or net destabilising effect beyond simple limits <cit.>. In a complex physical system such as an ST, accurately describing KBMs thus typically requires gyrokinetic simulations to explore the sensitivity of these modes. As a first step, we investigate the stability with respect to the ideal ballooning boundary, which is often used as a simple proxy for KBM stability. §.§.§ The Ideal Ballooning Mode The KBM instability is often associated with the MHD ideal ballooning mode (IBM) in the limit of n →∞, which is derived from ideal MHD and thereby neglects kinetic effects such as the finite Larmor radius and the effect of trapped particles. Despite making considerable simplifications to the physics, the IBM still describes the basic physics of the pure KBM instability: a competition between the stabilising effect of magnetic field line bending and the destabilising effect of a plasma pressure gradient combined with "bad" magnetic curvature. Moreover, IBM stability is much more easily assessed for a given plasma, and is sometimes used as a proxy for KBM stability in models such as the predictive pedestal model EPED (see e.g., <cit.>), and a good correlation is generally found in the pedestal of conventional tokamaks between the region where KBMs dominate and the region that is unstable to n →∞ ideal ballooning modes (see e.g., <cit.> and discussion therein). An approach pioneered by <cit.> allows one to calculate stability quickly and easily by integrating a one-dimensional differential equation for a given field line. This has been numerically implemented in a dedicated module within GS2. Moreover, for some fixed set of geometric parameters, this module can be used to scan the normalised pressure gradient α = -R q^2 dβ/dr and magnetic shear ŝ≡∂ q / ∂ψ to evaluate IBM stability for a given flux surface as a function of (ŝ,α). We therefore investigate where STEP-EC-HD and STEP-EB-HD are located with respect to the region of IBM stability and whether this is consistent with our suspicion that these equilibria are KBM dominated. The results of these calculations are shown in Figure <ref>, where we see that all of the flux surfaces we have considered are well outside the unstable region. As such, we expect STEP plasmas operating in these regions of parameter space to be stable to IBMs, a sensible proxy for KBM stability. This is an important piece of information about the dominant mode. * The dominant mode can be unstable in the region where the IBM is stable. The dominant mode is thus unlikely to be a pure KBM. We mention here that kinetic effects (e.g., corrections to the IBM formalism) may be responsible for some deviation from IBM-like behaviour. However, this observation motivates a careful study of the dominant mode with respect to the different physics parameters, as performed in the following subsection. §.§ Mode fingerprinting Statements <ref> and <ref> indicate that the dominant mode has clear features consistent with the KBM instability, while <ref> seems to suggest a mode with different instability characteristics, i.e., this mode might be KBM-like, but it may also be coupling to some other modes in our system in order to be driven unstable. We can examine whether this might be the case by fingerprinting the mode.
Mode ‘fingerprints' were introduced in <cit.> to identify the instabilities that cause transport losses in modern experiments from among widely posited candidates such as the KBM and others. The key idea underpinning mode fingerprinting is that analysis of both the gyrokinetic-Maxwell equations and gyrokinetic simulations of experiments reveals that each mode type produces characteristic ratios of transport in the density and heat channels. Thus, by examining the electron to ion heat and particle flux ratios, we might shed light on the nature of our instability. The important quantities for fingerprinting analysis are the particle and heat diffusivities, D_α = Γ_α / (∂ n_α / ∂ r) and χ_α = [Q_α - (3/2)T_αΓ_α] / (n_α ∂ T_α / ∂ r), where α is the species label and Γ and Q denote the particle and heat flux respectively. The mode fingerprints identified in <cit.> are reported in Table <ref> for the k_yρ_s = 0.2 mode of q = 3.5 (Ψ_n = 0.49). Comparing our simulation results with the fingerprint identifiers given in Table 1 of <cit.>, we see that our dominant instability does indeed have features in common with MHD-like modes (including the KBM), which cause very comparable diffusivities in all channels and are characterised by |δ E_∥| ≪ |δϕ| (see Figures <ref> and <ref>). However, we remark that this fingerprint may also be compatible with the ion-temperature gradient mode (ITG) and trapped electron mode (TEM). * The mode can be fingerprinted as a KBM or ITG/TEM. Observation <ref> provides us with a way to reconcile <ref> with <ref> and <ref>: the dominant mode is likely a hybrid instability. We now wish to study this hybrid instability. §.§ Sensitivity to different physics parameters and hybridisation of the KBM Based on our fingerprinting analysis, we have deduced that the dominant instability is likely a hybrid mode which shares the features of the KBM, ITG, and TEM. In the following, we attempt to further characterise the main instability by analysing the dependence on the inclusion of collisions, the number of species, the local gradients, the magnetic shear and safety factor, and β. §.§.§ Pressure gradient When the local geometry is held fixed, the growth rate of a pure KBM increases with the total pressure gradient and β. Thus, assessing the sensitivity of the dominant mode to these parameters allows us to test to what extent the mode is KBM-like. In Figure <ref> we explore the dependence of the mode on the electron and ion temperature gradients, a/L_Te and a/L_Ti, as well as on the density gradient[Note that we do not vary the electron and ion density gradients independently, since quasineutrality requires n_e=n_i, which in turn demands a/L_n_e = a/L_n_i globally.] a/L_n, whilst all other parameters are held constant. Here, we note that both ion species temperatures are changed together. Figure <ref> reveals another important characteristic of the hybrid mode: * the growth rate is sensitive to changes in the pressure gradient (a KBM-like feature). Comparing Figures <ref> (a) and (b), we note that the growth rate appears to depend more strongly on the electron temperature gradient than on the ion temperature gradient for both of the binormal wavenumbers considered. In Figure <ref>, we show a comparison of the linear spectrum for two linear simulations with the same total kinetic pressure gradient but different electron temperature gradients; one with the nominal electron temperature gradient (blue markers) and one with zero electron temperature gradient (orange markers).
In the second simulation, the total pressure gradient is kept fixed by putting the electron temperature gradient contribution into the ion temperature gradient. * The growth rate is sensitive to how the pressure gradient is varied - i.e. the mode is sensitive to the partitioning of the pressure gradient into electron and ion contributions. If the instability was the MHD-like KBM, the growth rate would be the same and the two curves would be coincident[Strictly speaking, this is only true if T_i = T_e. In our simulations T_i/T_e = 1.03 and thus any deviation from MHD-like KBM behaviour might be expected to have more sensitivity to the ions - the opposite of what we observe.] . However, what we observe is that the growth rate is much smaller in the case with zero electron temperature gradient across all scales. When understood alongside Figure <ref> we thus deduce that this mode is indeed much more sensitive to changes in the electron temperature gradient than the ion temperature gradient. This suggests the KBM is hybridising with an electron instability (see e.g., <cit.>) and so <ref> and <ref> reinforce <ref>, that is they suggest the mode is KBM-like, but has some important deviations from the pure KBM due to coupling to another mode. §.§.§ Collisions The impact of collisions is analysed in Figure <ref>, which shows the growth rate and mode frequency from GS2 linear simulations with and without collisions. We note that the growth rate values in the collisional case are lower than in the collisionless case, while the mode frequency is largely unaffected by collisions. That is, at the level of collisionality in this local equilibrium, collisions have a weakly stabilising effect on this dominant mode. This result is consistent with our fingerprinting analysis subsec:fingerprinting which highlighted the TEM and the ITG modes as potential culprits for hybridising with the KBM. Both the ITG and TEM modes are destabilised by the trapped electron population, and it is therefore not surprising that both are stabilised by collisions since these cause detrapping of the trapped population <cit.>. The KBM on the other hand, tends to be weakly destabilised by collision <cit.>. §.§.§ Species One can also explore the role of the different kinetic species in the simulation. Figure <ref> shows the growth rate and mode frequency from GS2 simulations with two (electron and deuterium), three (electron, deuterium, and tritium) and five species (electron, deuterium, tritium, thermalised helium ash, and a heavy impurity). In each instance, we ensure that the quasineutrality constraint is satisfied by adapting the value of the electron density gradient. Although there are some small quantitative differences between the three simulations, no qualitative change on the main instability is observed by varying the species number. This indifference to the plasma ion composition once again points towards the KBM (which are typically sensitive to the total pressure gradient rather than to the contribution of individual species) coupling with an electron instability[We note that the linear spectrum is slightly more sensitive to the plasma composition when a different collision model is used, although there is no qualitative change in the main instability. See <ref> for details.]. In this paper we primarily consider linear simulations with three species (electrons, deuterium and tritium). 
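To make explicit the quasineutrality bookkeeping used in the species scan above, a minimal sketch is given below; the ion densities and gradients are placeholders rather than STEP values, and only the relations n_e = Σ_i Z_i n_i and a/L_ne = Σ_i (Z_i n_i/n_e) a/L_ni follow from the constraint itself.

# Sketch of how the electron density and its gradient are adapted to preserve
# quasineutrality when the ion composition changes. Ion parameters are placeholders.
ions = [
    {"name": "deuterium", "Z": 1, "n": 0.45e20, "aLn": 1.0},
    {"name": "tritium",   "Z": 1, "n": 0.45e20, "aLn": 1.0},
    {"name": "helium",    "Z": 2, "n": 0.05e20, "aLn": 1.2},
]

n_e = sum(s["Z"] * s["n"] for s in ions)                     # n_e = sum_i Z_i n_i
aLn_e = sum(s["Z"] * s["n"] * s["aLn"] for s in ions) / n_e  # density-weighted gradient

print(f"n_e = {n_e:.3e} m^-3, a/L_ne = {aLn_e:.3f}")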
§.§.§ Safety factor and magnetic shear The instability also shows some sensitivity to changes to local equilibrium parameters such as the magnetic shear ŝ and the safety factor q (see Figure <ref>). Panel (a) of Figure <ref> shows how the dominant mode is destabilised by increasing ŝ, consistent with the ideal ballooning mode behaviour (i.e., KBM-like behaviour). The dependence of the mode on q (Panel (b) of Figure <ref>) is slightly more complicated but is also consistent with the behaviour of the ideal ballooning mode. As q increases, the stability boundary of the ideal ballooning mode moves towards higher ŝ and lower β^'. Therefore, increasing q will make access to the second stability region easier (which is why we see the mode stabilised with increasing q from the reference value). At sufficiently low q, the stability boundary gets pushed to higher β^', enabling the equilibrium to lie in the first stability region (see <cit.> for a more careful discussion of first and second stability), which is consistent with the stabilisation of the KBM-like dominant mode. We note that the global MHD equilibrium is not varied consistently in these cases. §.§.§ Trapped particles Thus far, our sensitivity study has shown once again that our dominant mode has many properties in common with the KBM, but also has some non-KBM-like properties. The fingerprinting analysis in subsec:fingerprinting suggested that the non-KBM-like properties might be due to coupling to a TEM/ITG (<ref>). A further investigation of the dominant mode is presented in Figure <ref>, which shows the growth rate and mode frequency as a function of k_yρ_s from four GS2 linear simulations: (i) the nominal simulation, (ii) a simulation with hybrid electrons, where the passing electrons are treated adiabatically (i.e., the passing particles have a Maxwellian response to δϕ perturbations), while trapped electrons are treated kinetically, (iii) a simulation with adiabatic electrons, and (iv) a simulation with adiabatic ions. We note that the hybrid and kinetic electron curves (i.e., (i) and (ii)) follow each other closely, although the hybrid electron curve is not suddenly stabilised at k_yρ_s > 0.6 (note also there is no sudden change in mode frequency). We thus determine that: * the dominant instability has a substantial drive from trapped electrons. Furthermore, we note that the simulation with adiabatic electrons is marginally stable at all binormal scales, once again highlighting the importance of kinetic electrons. The ion dynamics also provide an important drive for this instability; this can be seen by noting that the growth rate is strongly reduced in the simulation with adiabatic ions. §.§ Labelling the dominant mode From the careful analysis presented here, we conclude that the most likely candidate for this instability is a hybrid TEM/KBM. We also note that a similar hybrid TEM/KBM instability has been observed in <cit.> in NSTX gyrokinetic simulations. However, sensitivity scans in β_e and β' (discussed later in subsec:beta) show that this instability can also be tracked to the electrostatic limit where it connects to an ion temperature gradient (ITG) mode, and Figure <ref> clearly highlights the importance of ion dynamics. This may imply an additional coupling with the ITG instability (see e.g., <cit.>). In summary, we find that: * The mode generally propagates in the ion-diamagnetic direction (Figure <ref>). * The mode is electromagnetic (<ref>).
* The mode eigenfunction is strongly peaked in ballooning space and the mode has twisting parity (<ref>). * The mode is driven by the pressure gradient (<ref>) and by β (see discussion in subsec:beta). * The mode is unstable in a regime well below the ideal n=∞ MHD limit. * The mode is sensitive to how the pressure gradient is varied, e.g., it varies more strongly with the electron temperature gradient than with the ion temperature gradient (<ref>). * The mode is driven by trapped electrons (<ref>). * The mode couples smoothly to an electrostatic instability (see discussion in subsec:beta). * The mode requires access to δ B_∥ drive in order to be unstable (see discussion in sec:subdominant MTM instability). Henceforth, we will denote this mode as a hybrid KBM/TEM/ITG or simply hybrid KBM. We highlight that ultimately this is just a convenient label to refer to the properties stated above. § STABILISING THE HYBRID KBM It should be emphasised here that understanding the nature of this mode is of the utmost importance for studying the high performance phase in conceptual ST reactors similar to STEP. Earlier studies for similar high beta conceptual burning ST plasmas <cit.> found MTMs dominating over several ranges in k_y ρ_s, and inferred that MTMs could cause substantial transport. Here, however, for the STEP equilibria STEP-EC-HD and STEP-EB-HD (see sec:code_comparison for STEP-EB-HD results) we find that hybrid modes dominate at all scales across a range of equilibria at various surfaces between the deep core and pedestal top. Indeed, Paper (II) will show that it is actually this hybrid KBM instability which is responsible for driving most of the heat flux in the STEP plasmas considered in this work. It is thus important to understand the nature of this mode and find strategies to mitigate it. §.§ Sensitivity to β_e and β^' Motivated by the KBM-like properties of the dominant instability, we begin by studying the sensitivity of the mode with respect to β. If the dominant mode were a simple KBM, we would expect the mode to be stable below some finite value of β, and for the growth rate to scale with β above this threshold. §.§.§ Varying β_e with β^' fixed In Figure <ref>, we study the impact of varying β whilst the other parameters (notably β^') are held fixed (the nominal value is denoted by a red vertical line). We note that in this case the mode is stabilised as β is dropped (and the growth rate increases when β is increased), which indicates that the dominant mode is accessing the electromagnetic component of the drive terms (the electromagnetic component of the drive is reduced at lower β, while the stabilising effect of β' is retained). This behaviour is broadly consistent with what we might expect to see for a simple KBM. Later, in sec:subdominant MTM instability, we will see that the hybrid KBM necessitates the inclusion of parallel magnetic fluctuations δ B_∥ in order to access the electromagnetic drive. This result is in line with earlier works <cit.> that find that parallel magnetic fluctuations act to destabilise the KBM, and that neglecting δ B_∥ effects can reduce the KBM growth rate by up to a factor of 6 <cit.>. However, this is even more severe in simulations of the hybrid KBM, which we find to be everywhere stable when δ B_∥ is neglected. §.§.§ Varying β_e and β^' consistently In Figure <ref> we study the effect of varying β whilst also varying β^' consistently with the local equilibrium, following an approach similar to that outlined in <cit.>.
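A minimal sketch of one way to construct such a consistent scan is given below: the kinetic gradients are held fixed, so that the total β^' is recomputed from them for each value of β and scales in proportion to β. The species fractions and gradient values are illustrative assumptions, not the STEP table entries.

# Sketch of a "consistent" beta scan: with fixed density and temperature gradients,
# beta' = -sum_s beta_s (a/L_ns + a/L_Ts) is recomputed for each beta, so beta' ~ beta.
# Fractions and gradients below are placeholders, not STEP values.
species = [
    {"name": "electron",  "frac": 0.50, "aLn": 1.0, "aLT": 2.0},
    {"name": "deuterium", "frac": 0.25, "aLn": 1.0, "aLT": 1.8},
    {"name": "tritium",   "frac": 0.25, "aLn": 1.0, "aLT": 1.8},
]

def beta_prime(beta_total):
    """Normalised pressure-gradient drive for a given total beta (gradients fixed)."""
    return -sum(beta_total * s["frac"] * (s["aLn"] + s["aLT"]) for s in species)

for beta in [0.0, 0.05, 0.10, 0.16]:
    print(f"beta = {beta:.2f} -> beta' = {beta_prime(beta):.3f}")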
Figure <ref> reveals that this hybrid KBM mode remains unstable even in the electrostatic limit (β = 0, β^' = 0). The smooth variation of the growth rate (and the real frequency) with β indicates that the mode is coupling to some electrostatic instability which prevents stabilisation. This is another feature of the hybrid KBM which markedly distinguishes it from the simple KBM (which would be stable in an electrostatic simulation). We remark that the behaviour seen here is consistent with coupled KBM-ITG, and similar behaviour has been seen in both theory <cit.> and experiment <cit.>. Figure <ref> also shows that the growth rate is reduced at higher β due to β' stabilisation, noting that a mode transition occurs as β passes through β∼ 0.13 (note the abrupt change of sign of frequency and the further reduction of the growth rate). At these values of β and β', the hybrid KBM is fully stabilised, revealing the underlying subdominant MTM instability (see sec:subdominant MTM instability). This suggests that increasing β together with β^' can result in the stabilisation of the dominant hybrid mode. As a complement, Figure <ref> shows the linear growth rate spectrum for three different values of β with consistently varied β^': the nominal case (orange markers) and a lower and a higher β case (blue and green markers). In the case with higher β, the hybrid KBM instability vanishes and the MTM instability (see sec:subdominant MTM instability) becomes the most unstable mode in the system. In the case of lower β, the ITG instability drives an unstable mode with a higher growth rate than the most unstable hybrid mode at the nominal β value. §.§.§ Implications for the current ramp We remark here that, as mentioned in sec:equilibria, one of the major challenges for STEP is the need to generate the required plasma current of I_p≃ 20 MA. Although Figures <ref> and <ref> demonstrate that a high beta regime free of the hybrid mode exists, it is less clear how the hybrid mode could be avoided on the approach to such a flat-top during the I_p ramp. During the current ramp, the plasma equilibrium will evolve continuously from a β = 0 state, where it will be dominated by electrostatic instabilities, up to the reference β, where it is dominated by the hybrid KBM. However, we have seen in Figures <ref> and <ref> that the hybrid KBM becomes active at much smaller β than that which we are aiming to achieve in the flat top. Thus, in reaching this equilibrium one must first pass through a region where the hybrid KBM is active, and this could shut down the evolution of the plasma, e.g., if the turbulent transport were too large to sustain the profiles (see Paper (II) for further discussion). It is currently not clear whether it is possible to avoid the onset of the hybrid KBM completely, or how much heating and fuelling would be required to burn through it should it appear earlier in the current ramp. It is also worth remarking that the β_e = 0.16 case will probably exceed the resistive wall mode control limit and thus is likely not viable for other reasons. §.§ Sensitivity to θ_0 Sheared E×B flows in linear local gyrokinetic simulations can be modelled by introducing a time-dependence into the radial wavenumber k_x, which corresponds to the ballooning parameter θ_0=k_x,0/(k_yŝ). For a mode in ballooning space at a given k_y, the dependence of the growth rate on θ_0 is a useful indicator of the mode's susceptibility to flow shear stabilisation <cit.>.
If the mode is stable at some θ_0, then when flow shear advects the mode it can be moved into a stabilising region, reducing its effective growth rate. Figure <ref> shows the growth rate and frequency of the dominant mode at k_yρ_s=0.2 and k_yρ_s=0.3 as a function of θ_0 for the surface at Ψ_n=0.49 of STEP-EC-HD. At k_yρ_s=0.3, the growth rate is strongly suppressed as θ_0 increases, with the hybrid KBM instability being stable already at θ_0 ≥π/8. The growth rate at k_yρ_s=0.2 decreases as θ_0 increases from 0 to π/4. At θ_0 ≃π/4, the mode is stable and remains close to the marginal stability until θ_0 ≃ 3π/4. At k_y ρ_s=0.2, a different instability propagating in the electron drift direction becomes unstable at θ_0 ∼π. The high sensitivity of the hybrid KBM instability to θ_0 suggests a possible important effect of flow shear, a relationship which will also be explored further in Paper (II). We note that a strong dependence of a KBM-like instability on θ_0 was also observed in a similar STEP conceptual design <cit.>, where it was noted that in the local equilibrium studied flow shear effects may largely suppress transport from KBMs[It is important to remark that the stiffness of the pure KBM counters this argument by suggesting that even a small increase in drive could compensate for any stabilisation. It is also worth mentioning that in the pedestal of conventional aspect ratio tokamaks, it has been noted that owing to the stiffness of KBM transport, the KBM may still play a role in limiting gradients close to the critical value even when the mode is marginally stable <cit.>]. This motivates the analysis of the subdominant instability. § SUBDOMINANT MTM INSTABILITY We now turn our attention to the subdominant MTM instability, which may play an important transport role, especially if the hybrid mode is effectively stabilised by flow shear. MTMs generate magnetic islands on rational surfaces that tear the confining flux surfaces and generate heat transport primarily through the electron channel <cit.>. We note that local GK simulations have revealed MTMs as the dominant microinstabilities in the wavenumber range k_y ρ_s<1 locally at mid-radius (where β_e ∼ 5%-10%) in several spherical tokamak plasmas (see <cit.> and references therein). GK studies for the high performance phase in conceptual ST reactors have also found MTMs dominant over an extended range of binormal scales and likely to have significant impacts on transport <cit.>. The presence of the fastest growing tearing mode can be investigated by enforcing the tearing (odd) parity of the perturbed distribution function, exploiting the up-down symmetry of the Miller equilibrium. This test is carried out with GS2 at θ_0=0 and the results are shown in Figure <ref> (see orange curve). We thus see that there are in fact unstable modes with tearing parity (e.g., MTMs), but on this surface in STEP-EC-HD these are always subdominant to the hybrid KBM. Another way to obtain the MTM from an initial value solver as the fastest growing mode in our system specifically, without forcing the parity of the eigenmode, is to simply switch off compressive magnetic perturbations i.e., we exclude the δ B_∥ contribution to the GK equation. Figure <ref> shows that the simulation neglecting δ B_∥ (green) is equivalent to the nominal simulation with a tearing parity initial distribution function (orange). 
Essentially, removing δ B_∥ fluctuations from the system stabilises the hybrid KBM, whilst having no impact on the MTM, thus leaving the MTM as the dominant mode. The eigenfunctions of ϕ and A_∥ corresponding to the MTM at k_yρ_s = 0.14 are shown in Figure <ref>. We find that the A_∥ fluctuation is significantly larger than the electrostatic fluctuations close to the inboard midplane, as expected for MTMs. The electrostatic potential eigenfunction exhibits a clear multiscale structure (ion-scale in k_y, electron scale in k_x), with a narrow oscillatory structure in θ overlaying a much broader oscillation. The A_∥ eigenfunction is more strongly peaked about θ = 0, with subsequent peaks occurring along the field line at θ mod 2π = 0, the outboard midplane. Similar MTM eigenfunctions extended in ballooning angle have been seen in simulations of MAST <cit.> and NSTX <cit.> discharges and BurST <cit.>. The extended nature of these modes, requiring a parallel domain θ∈ [-70π, 70π], coupled with a very small growth rate, means that even linearly resolving the subdominant MTM can become very computationally expensive. Nonlinear simulations involving the MTMs in Figure <ref> will be computationally challenging in these STEP plasmas, owing to the intrinsic multiscale character of the MTM in the radial direction (which is linked to its multiscale character in θ in ballooning space). Figure <ref> illustrates how the MTM growth rates (for modes at k_yρ_s=0.1 and k_yρ_s=0.3) depend on θ_0, showing that the MTM growth rate (particularly at k_yρ_s=0.1) is much less sensitive than the KBM hybrid growth rate (see Figure <ref> of subsec:stabilising the hybrid KBM theta); this suggests that these MTMs should be much less susceptible than KBM hybrid modes to flow shear stabilisation. We note that tokamak regimes exist where turbulent transport from MTMs is affected by flow shear stabilisation <cit.>. Recent theoretical work has identified an important local equilibrium parameter that helps explain this <cit.>, and the relevance of this parameter in experiments and numerical simulations is explored in <cit.>. The insights gained here may be helpful in the future optimisation of STEP design points. § CODE COMPARISON Careful benchmarking is essential for ensuring the fidelity of GK simulations in next-generation reactor design, and for identifying (and ideally rectifying) issues that may arise in simulations using any single code (see e.g., the discussion of the numerical instability in paper II). Furthermore, this benchmarking also paves the way for the detailed nonlinear investigation of the companion article. In this section, we compare the results of CGYRO, GENE and GS2 linear GK simulations carried out on the radial surfaces corresponding to q=3.5 (STEP-EC-HD and STEP-EB-HD) and q=3.0 (STEP-EC-HD only). As previously, we compare results (linear eigenvalues and eigenmodes) for both the hybrid KBM instability and the subdominant MTM instability. As before, the numerical resolutions used in these simulations are listed in Table <ref>, where again we resort to different resolutions for simulations of the hybrid KBM instability and simulations of the subdominant MTM instability. These simulations evolve three species (electron, deuterium and tritium) and include both perpendicular and parallel magnetic fluctuations, δ A_∥ and δ B_∥, for simulations of the hybrid KBM, whilst including only δϕ and δ A_∥ for MTM simulations.
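When comparing codes in this way it is convenient to reduce each growth-rate spectrum to a few summary numbers; a possible sketch is given below, where the γ(k_y) arrays are made-up illustrations rather than actual GS2/CGYRO/GENE output.

# Sketch of how cross-code agreement can be quantified for a growth-rate spectrum
# gamma(k_y): maximum and mean relative deviation from a reference code.
# All numbers below are illustrative, not actual benchmark data.
k_y = [0.1, 0.2, 0.3, 0.4, 0.5]
gamma_ref = [0.10, 0.21, 0.26, 0.28, 0.22]    # e.g. the reference code
gamma_other = [0.11, 0.20, 0.27, 0.29, 0.21]  # e.g. a second code

rel_dev = [abs(a - b) / abs(b) for a, b in zip(gamma_other, gamma_ref)]
print(f"max relative deviation:  {max(rel_dev):.1%}")
print(f"mean relative deviation: {sum(rel_dev) / len(rel_dev):.1%}")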
In each code, we aim to use the most advanced collision operator and thus the Sugama collision model <cit.> is used in CGYRO and GENE, while the linearized Fokker-Planck collision model of <cit.> is considered in GS2. Figure <ref> compares the growth rate and the mode frequency at Ψ_n=0.49 of STEP-EC-HD, and a reasonable agreement is found between all three codes. We note that it is of no great surprise that there is some very small variation between the growth rates since; e.g., each code employs different discretisations of the 5D space. The comparison for the subdominant MTM instability is shown in Figure <ref>. Retrieving a good agreement here is much more challenging, since the growth rates are relatively small and therefore more sensitive to the different numerical implementations and dissipation employed in the three codes (see <cit.> for code-specific details). In addition, it was found that numerical convergence in these simulations required a very high pitch-angle resolution (CGYRO) and a very high θ resolution (GENE) in order to capture the parallel structure of these very extended modes (<ref>). The resolutions used in these simulations are as listed in Table <ref>. We also note that the maximum growth rate differs by less than 20 % when a lower resolution is considered, thus motivating the lower numerical resolution used in some of the nonlinear simulations of Paper (II). As shown in Figure <ref>, we can see that all three codes show a good agreement on the eigenfunctions for both the dominant and subdominant modes. The three code comparison is also carried out on the q = 3.0 flux surface of STEP-EC-HD; the dominant instability is shown in Figure <ref> and a comparison for the subdominant MTM instability is shown in Figure <ref>; and also for the q=3.5 surface of STEP-EB-HD; the dominant instability is shown in Figure <ref> and a comparison for the subdominant MTM instability is shown in Figure <ref>. We conclude by noting that there is a good agreement between the three codes in all the considered cases. § CONCLUSIONS In this paper, we have presented the results of local linear microinstability studies of the thermal plasma on a range of flux surfaces from the core to the pedestal top in the two preferred STEP flat-top operating points. We find that the linear spectra is dominated by a hybrid mode, sharing features of the KBM, ITG, and TEM instability, at the ion Larmor scale, with weakly-growing subdominant MTMs present at similar scales. The local equilibria examined here (q = 3.0, 3.5, 4.0, 5.0 in STEP-EC-HD and q=3.5 in STEP-EB-HD) were found to be completely stable to electron scale modes. A summary of the dominant microinstabilities from some of our simulations is given in Table <ref> alongside the results from some similar conceptual designs of burning ST plasmas, namely the TDoTP high q_0 equilibrium taken from <cit.>; and an earlier prototype for a burning ST reactor BurST taken from <cit.>. Shown here is a summary for only the q=3.5 surface in each equilibrium.[Note that data was only available for the q=4.3 flux surface in BurST. However, we remark that the stability properties of the q=4 and q=5 surfaces of STEP-EC-HD are qualitatively identical to those of the q=3.5 surface of STEP-EC-HD. As such, we believe that the comparison between the q=3.5 surfaces of STEP-EC-HD and STEP-EB-HD to the q=4.3 surface of BurST is still relevant for the purposes of broad understanding.] We remark that * The dominant mode shares properties of a hybrid KBM/TEM/ITG. 
* The mode is electromagnetic (KBM-like) but can be tracked consistently back to the electrostatic limit. * The mode has many features typical of the KBM: it generally propagates in the ion-diamagnetic direction, has eigenfunctions which are strongly peaked in ballooning space, and has twisting parity (KBM-like); however, the mode is unstable in a regime below the ideal n=∞ MHD limit. * The mode is driven by the pressure gradient (KBM-like) but it is more sensitive to the electron temperature gradient than the ion temperature gradient. * The mode is sensitive to trapped electron dynamics. * The mode is sensitive to θ_0. * The mode growth rate is smaller at larger values of β and β^'. * The mode growth rate is less sensitive to β and β^' at low β values. * The mode requires access to δ B_∥ drive in order to be unstable. * There is no unstable branch of collisionless MTMs at intermediate scales (k_yρ_s ∼ 4), unlike in <cit.> and <cit.>. * A collisional MTM is unstable but is always subdominant to the KBM-like instability. We have shown that the hybrid KBM can be stabilised by increasing β'. Due to the strong sensitivity to θ_0, the mode might be suppressed by E×B flow shear, and this is explored in Paper (II) by means of nonlinear turbulence simulations. A detailed three-code comparison involving GS2, CGYRO, and GENE was performed, and we found good agreement between the three codes for a range of different plasma parameters. The result of this benchmark increases our confidence in the fidelity of GK modelling of electromagnetic instabilities, as well as highlighting the need for care in the handling of parallel dissipation in order to avoid encountering numerical instabilities in these challenging computations (see discussion in paper II). The results of the linear stability analysis, and of the cross-code verification, pave the way for the detailed turbulence studies undertaken in the companion article (II). We are indebted to E. Belli, J. Candy, B. Chapman, D. Hatch, P. Ivanov and M. Hardman for helpful discussions and suggestions at various stages of this project. The authors would also like to thank the GENE team, most notably T. Görler and D. Told, for their help and support. The first author is grateful to The Institute for Fusion Studies (IFS), Austin TX, for its splendid hospitality during a stimulating and productive visit. This work has been funded by the Engineering and Physical Sciences Research Council (grant numbers EP/R034737/1 and EP/W006839/1). Simulations have been performed on the Viking research computing cluster at the University of York and on the Marconi supercomputer from the National Supercomputing Consortium CINECA, under the project STETGMTM. Part of this work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (<www.csd3.cam.ac.uk>), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (<www.dirac.ac.uk>). To obtain further information on the data and models underlying this paper please contact [email protected]. § NUMERICAL RESOLUTION CONVERGENCE Here we discuss the numerical resolution convergence studies in GS2 linear simulations for the dominant hybrid KBM instability at the q=3.5 flux surface of STEP-EC-HD.
Similar resolution convergence scans were also performed with CGYRO and GENE at all the radial surfaces considered in this work, and for both the dominant and subdominant modes. These detailed convergence tests are also used to inform the resolutions used in the nonlinear simulations which are the focus of paper (II). Figure <ref> shows the growth rate and mode frequency at different parallel grid resolutions. Convergence is achieved for n_θ≥ 32 at low mode numbers, while a higher resolution (n_θ≥ 64) is required at k_yρ_s>0.4. Convergence with respect to the grid extent in ballooning space (which is equivalent to the radial grid resolution in the flux tube) is controlled in GS2 by a dedicated input parameter, and is investigated in Figure <ref>. The growth rate is only slightly affected by this parameter, as expected since for the hybrid KBM both δϕ and δ A_∥ are very localised around θ = 0 (see Figure <ref>). It should of course be noted once again that the MTM has much more stringent conditions on the radial and parallel grid resolutions (see Table <ref>). Velocity space resolution convergence is tested in Figures <ref> and <ref>, where the number of passing pitch-angles and the number of energy grid points are varied. Across these scans, the growth rate and mode frequency show only a weak dependence on the velocity space resolution over the entire k_y spectrum. § DEPENDENCE ON THE COLLISION MODEL Here we briefly discuss the dependence of our simulation results on the collision model. In particular, we observe that the linear spectrum is more sensitive to the plasma composition (the number of species evolved) when the Sugama collision operator <cit.>, a sophisticated approximation to the full linearized, gyro-averaged Fokker-Planck operator, is used instead of the full linearized Fokker-Planck collision operator which is used in all of our GS2 simulations. We first compare the growth rate and mode frequency values from CGYRO and GS2 simulations carried out by evolving two species (see Figure <ref>) and three species (see Figure <ref>) and considering different collision operator models: CGYRO simulations are performed using the Lorentz and the Sugama collision operators, while GS2 uses the linearized Fokker-Planck collision operator. In all of these simulations, we note that there is only a very weak dependence on the type of collision model employed. However, Figure <ref> shows that growth rate values are more sensitive to the collision model when five species are considered. Interestingly, although there is relatively good agreement between CGYRO and GS2 simulation results when the Lorentz operator is used in CGYRO, we find that the growth rate values are smaller (by approximately 20%) when the Sugama collision operator is used in CGYRO. It is important to note, however, that there is no qualitative difference in the main instability. We will also see in Paper (II) that reducing the linear growth rates by such a small amount makes no significant difference to the transport properties on the surface considered.
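The convergence and sensitivity checks described in these appendices can be reduced to a simple acceptance criterion on the peak growth rate; the sketch below illustrates one such criterion, with the tolerance and all growth-rate values chosen purely for illustration.

# Sketch of a simple acceptance test for convergence/sensitivity scans: accept a
# setting once the relative change in the peak growth rate falls below a tolerance.
# The tolerance and growth-rate values are illustrative assumptions only.
def converged(gamma_coarse, gamma_fine, tol=0.05):
    """True if the relative change in peak growth rate is below tol."""
    return abs(gamma_fine - gamma_coarse) / abs(gamma_fine) < tol

scans = [
    ("n_theta: 32 -> 64", 0.271, 0.268),
    ("n_energy: 8 -> 16", 0.268, 0.266),
    ("Lorentz -> Sugama (5 species)", 0.262, 0.215),
]

for label, coarse, fine in scans:
    print(f"{label:32s} converged: {converged(coarse, fine)}")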
http://arxiv.org/abs/2307.00234v1
20230701055334
The Potential of LEO Satellites in 6G Space-Air-Ground Enabled Access Networks
[ "Ziye Jia", "Chao Dong", "Kun Guo", "Qihui Wu" ]
cs.NI
[ "cs.NI", "eess.SP" ]
The Potential of LEO Satellites in 6G Space-Air-Ground Enabled Access Networks Ziye Jia, Member, IEEE, Chao Dong, Member, IEEE, Kun Guo, Member, IEEE, and Qihui Wu, Senior Member, IEEE. Ziye Jia is with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China, and also with the State Key Laboratory of ISN, Xidian University, Xi'an 710071, China (e-mail: [email protected]). Chao Dong and Qihui Wu are with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China (e-mail: [email protected], [email protected]). Kun Guo is with the East China Normal University, Shanghai 200241, China (e-mail: [email protected]). August 1, 2023 Space-air-ground integrated networks (SAGINs) help enhance service performance in the sixth generation communication system. A SAGIN is basically composed of satellites, aerial vehicles, and ground facilities, as well as multiple terrestrial users. Therein, low earth orbit (LEO) satellites have become popular in recent years due to their low development and launch cost, global coverage, and low-delay services. Moreover, LEO satellites can support various applications, e.g., direct access, relay, caching and computation. In this work, we first provide the preliminaries and framework of SAGIN, in which the characteristics of LEO satellites, high altitude platforms, and unmanned aerial vehicles are analyzed. Then, the roles and potential of LEO satellites in SAGIN access services are analyzed. Advanced techniques such as multi-access edge computing (MEC) and network function virtualization are introduced to enhance LEO-based access service capabilities, in the form of hierarchical MEC and network slicing in SAGIN. In addition, corresponding use cases are provided to verify the propositions. Besides, we discuss the open issues and promising directions in LEO-enabled SAGIN access services for future research. Space-air-ground integrated network (SAGIN), low earth orbit (LEO) satellite, multi-access edge computing (MEC), network slicing. § INTRODUCTION §.§ Background and Motivation Research related to the sixth generation communication system (6G) has gained great interest from both industry and academia, among which the space-air-ground integrated network (SAGIN) is a promising technique to extend the range of global services and worldwide Internet access <cit.>. Specifically, with the explosive growth of satellite businesses from various individuals and institutions, such as satellite navigation, emergency rescue, and deep space exploration, there exist significant demands to enhance the performance of traditional communication networks <cit.>, and SAGIN related techniques are promising candidates to satisfy such requirements.
Besides, given the limitations of ground infrastructure, SAGIN-enabled access and global coverage are significant for ensuring regional coverage and access quality. In particular, SAGIN is typically composed of three layers according to altitude, i.e., space satellites, aerial vehicles, and terrestrial facilities, as illustrated in Fig. <ref>. Specifically, in space, satellites are basically divided into three types based on their heights, i.e., geosynchronous earth orbit (GEO) satellites, medium earth orbit (MEO) satellites, and low earth orbit (LEO) satellites <cit.>. High altitude platforms (HAPs) and unmanned aerial vehicles (UAVs) are popular facilities in the air <cit.>. HAPs are characterized by stable and large coverage, while UAVs are specialized in flexible services <cit.>. Besides, there exist heterogeneous resources such as sensors, transmitters, computation modules, caches, etc. Among these platforms, LEO satellites play prominent roles in terms of networking and constellations, and we mostly focus on elaborating the roles and potential of LEO satellites in SAGIN access networks. However, there are multiple heterogeneous users such as remote Internet of things (IoT) devices, maritime voyages, and air transportation. Furthermore, the number of such users has increased sharply in recent years, and it is challenging to satisfy the ever-increasing requirements, since the number of platforms in SAGIN, as well as the corresponding loading capacity such as hardware, is limited. Besides, the resources in SAGIN are dynamic and the operational modes of various platforms are quite diverse. For example, LEO satellites in different orbits have different cycle periods, the movement of UAVs is flexible but unpredictable, while HAPs are quasi-static. Also, the scarce and intermittent communication resources caused by the periodic motion of satellites aggravate the resource competition. Consequently, how to address such challenges, i.e., heterogeneous platforms, various demands with diverse quality of service (QoS) requirements, and limited resources, is a significant issue. Accordingly, an elastic and scalable SAGIN should be well designed to satisfy the various demands. Besides, the potential of LEO satellites should be explored due to the increasing number of LEO satellite constellations, such as Starlink <cit.>. Advanced in-network computing paradigms such as multi-access edge computing (MEC), network function virtualization (NFV), and software defined networking (SDN) are introduced into SAGIN for efficient and resilient resource provisioning <cit.>. SDN can assist in managing SAGIN by separating the control plane from the data plane. NFV helps to provide multiple services for different users by decoupling virtual network functions (VNFs) from physical platforms. In addition, a series of VNFs for one task forms a service function chain (SFC). The MEC paradigm can be utilized to help remote users in SAGIN with computation task offloading <cit.>. In short, MEC, NFV, and SDN are prospective modes for flexible resource provisioning, and can help deal with multiple tasks over heterogeneous platforms and limited resources in SAGIN. §.§ State of the Art SAGIN, as well as LEO satellites, has been widely studied recently. For example, in <cit.>, the authors focus on the data offloading problem in the space-air-ground network to guarantee the balance between energy consumption and mean time cost. 
<cit.> investigates LEO satellite access networks integrated with terrestrial networks to realize seamless global communication services. In <cit.>, the authors investigate a distributed mobility management framework for space-terrestrial networks, via reconfiguration of mobility management functions, to improve handover decision efficiency. In <cit.>, the authors analyze the communication system of LEO satellites utilizing stochastic geometry. <cit.> investigates LEO satellite constellations to improve Internet connections for remote areas. <cit.> presents a multi-layer management structure composed of MEO and LEO satellites to achieve efficient mobility and resource control. As for MEC and NFV in SAGIN, <cit.> designs a LEO satellite enabled heterogeneous MEC framework, as well as an offloading scheme to improve computational performance. In <cit.>, a software defined LEO satellite framework as well as a VNF deployment model are proposed, and efficient deployment algorithms are designed. <cit.> proposes a reconfigurable intelligent surface based MEC framework for space information networks to improve both communication and computing capabilities. In <cit.>, the authors design a LEO satellite aided edge computing platform to guarantee the computing continuum. <cit.> proposes a framework of orbital edge computing with LEO satellite constellations to satisfy the growing demand of multiple applications. <cit.> analyzes a series of possible ways to combine machine learning and satellite networks to provide satellite based computing. However, from the perspective of LEO satellites, multi-tier MEC for SAGIN, as well as the related network slicing, has not been thoroughly investigated. Hence, in this work, we explore the potential of LEO satellite based access patterns in SAGIN, especially hierarchical MEC and NFV-based network slicing. §.§ Contributions In this work, we provide the preliminaries and current development of SAGIN, and analyze the access patterns from the perspective of LEO satellites. Besides, advanced techniques such as MEC and NFV are introduced into the LEO satellite based access networks to promote network capability and enhance service quality, in terms of the number of served users, service diversity, resource flexibility, etc. Both users and resource providers benefit from these new access technologies. To differentiate from the general single-layer mobile edge computing service, we further present the hierarchical MEC framework as well as the SFC deployment technique. To clearly elaborate the performance, use cases are also provided. Moreover, we provide preliminary analyses of the open issues and possible directions for LEO satellite based access in SAGIN. Note that in this work, we mainly consider remote users and non-terrestrial resources, instead of the abundant urban wireless communication facilities. The contributions of this work are summarized as follows: * The preliminaries of SAGIN are detailed, in which the LEO satellite based access patterns are presented, including the employment of advanced techniques such as MEC, NFV, caching, etc. * Based on LEO satellites, hierarchical MEC as well as SFC implementation based network slicing for SAGIN are investigated. Besides, corresponding use cases are provided to validate the proposed schemes. 
* The LEO satellite based access possibilities in SAGIN as well as open challenges are analyzed, along with promising directions for future research. The rest of this work is organized as follows. We first provide an overview of SAGIN in Section <ref>, including its composition, resources, demands, applications, etc. In Section <ref>, the role and potential of LEO satellite based access in SAGIN are elaborated, together with the advances of the in-network computing paradigms MEC and NFV for resource management, including validated use cases. The open issues and possible directions are presented in Section <ref>. Finally, we draw the conclusion in Section <ref>. § BASICS OF SAGIN In this section, we provide the basics of SAGIN including LEO satellites. Specifically, as illustrated in Fig. <ref>, SAGIN is generally composed of hierarchical satellites in space, multiple vehicles in the air, and basic infrastructures on the ground. As for the multiple satellites in the space segment, the non-LEO satellites include GEO and MEO satellites. For example, developed GEO/MEO satellites include MicroGEO <cit.>, Boeing, O3B, etc. The air vehicles include airships, HAPs, aircraft, and UAVs, classified on the basis of different heights and functions. Besides, most users come from the ground, such as emergency areas, the ocean, deserts, and remote areas without the service of ground base stations. Note that with the development of deep space exploration, e.g., lunar exploration and Mars exploration, deep space probes are also potential users in SAGIN, since the exploration data should be transmitted back to Earth via multi-layer satellites. In addition, agents such as satellites, HAPs, and UAVs act as resources in some cases, and as users in other scenarios. Note that the meta resources include sensors, transceivers, computation, caching, etc., equipped on various platforms, to support multiple applications, such as deep space exploration, air transportation, ocean service, emergency rescue, remote area users, and navigation augmentation <cit.>. Hence, resource management and efficient allocation are significant issues, to leverage limited resources to satisfy heterogeneous demands. The properties of satellites, HAPs, and UAVs are provided in Table <ref>. Specifically, from the view of height, satellites in space operate above 160km, HAPs or airships in the air operate at 17km-22km, and UAVs in the air have a flight height between 600m and 18km, depending on the detailed demands and platform properties. Besides, satellites can work for many years, HAPs can last for months to years, and UAVs can continue working for minutes to hours. The coverage of satellites is large and periodic, HAPs can cover a medium and fixed area, while UAVs can flexibly cover a small area. From the perspective of energy supply, satellites utilize solar panels, HAPs leverage solar panels as well as batteries to store energy for the night, and lithium batteries are applied in UAVs. In addition, the controllers for satellites are on the ground, HAPs can be controlled by the ground center or satellites, and existing UAVs are operated by ground controllers. The typical applications of satellites include observation, communication, navigation, astronomy, meteorology, etc. HAPs can support applications such as real-time monitoring, communication relay, emergency recovery, and rocket launch platforms. UAVs are applied in the areas of sensing, communication, surveillance, farming, and logistics. 
§ LEO SATELLITE BASED ACCESS In this section, the characteristics of LEO satellite based access patterns in SAGIN are analyzed first, and then we introduce the network advances for enhancement, i.e., the hierarchical MEC and network slicing techniques. Besides, corresponding use cases are provided to verify the respective technique implementations. §.§ Properties of LEO Satellites LEO satellites attract increasing attention worldwide due to networking cooperation via inter-satellite links and mega-constellations, such as Starlink of SpaceX and Kuiper of Amazon, to provide abundant applications for the increasing demands from remote IoT, maritime voyages, etc. To make this clear, in Table <ref>, the potential of LEO satellites is discussed by revisiting the properties of the multiple satellite types. In particular, the height of LEO satellites ranges from 500km to 2000km, the height of MEO satellites ranges from 2000km to 20,000km, and GEO satellites operate between 20,000km and 36,800km. Besides, heights larger than 36,800km are deemed outer space. The orbit periods of LEO, MEO, and GEO satellites are 1.5h-2h, 2h-24h, and 24h, respectively. As for the launch cost, LEO satellites have the lowest. In addition, the round-trip delay of LEO satellites is 30ms-50ms, while it is 125ms-250ms and 400ms-600ms for MEO and GEO satellites, respectively. Hence, compared with MEO and GEO satellites, LEO satellites can provide lower-delay services for terrestrial users. Besides, LEO satellite constellations dramatically improve global coverage, including the polar areas. However, the network complexity of LEO satellites is high, since many LEO satellites are networked, especially in mega-constellations, while the network complexity of a GEO satellite is low since it can operate independently to cover one third of the globe. Indeed, users suffer from frequent handovers within the coverage of LEO satellites, while switching is not even necessary under GEO satellites due to their large coverage and quasi-static property. From the perspective of LEO satellites, users may come from deep space, space, air, and ground (including the ocean). Moreover, LEO satellites act as multi-hop relays or servers. In short, the properties and advantages of LEO satellites reveal why most countries in the world pay close attention to LEO network construction. Accordingly, the challenges such as network complexity and frequent handover discussed above should be addressed. §.§ Advances for Enhancement Emerging advances such as MEC and NFV based network slicing techniques enable LEO satellite access to flexibly accommodate more users and efficiently leverage the heterogeneous resources. Fig. <ref> provides a couple of access examples from the perspective of LEO satellites, such as LEO based direct access, LEO assisted UAV access, LEO assisted HAP access, etc. Besides, advanced techniques such as MEC, caching, and NFV are also applied in the LEO satellite enabled access network to enhance performance. Note that, beyond the direct offloading from terrestrial users to LEO satellites in the example references <cit.>, the LEO satellite enabled multi-layer MEC mechanism is more applicable. Besides, LEO based slicing in SAGIN is prospective, instead of the independent LEO satellite based VNF deployment in Fig. <ref>. More specifically, MEC can be deemed a node-level resource virtualization technique, while SFC implementation is a network-level virtualization technique, i.e., network slicing. 
Accordingly, the LEO satellite enabled hierarchical MEC and network slicing are elaborated in the following. §.§.§ LEO Satellites Enabled Hierarchical MEC in SAGIN The MEC paradigm can provide an efficient and effective mechanism to assist terrestrial users in dealing with their computation demands. In SAGIN, satellites, HAPs, UAVs, as well as ground users, are equipped with different computation capabilities according to their limited loading capacities. The multi-layer MEC enables a computation continuum. As illustrated in Fig. <ref>, the terrestrial terminal (user 1) is equipped with limited computation server capability, so its task is partly computed locally by user 1, and partly offloaded to the LEO satellite for online computing. As for terrestrial user 2 and user 3, the task data are offloaded to nearby UAVs, and the computation task of user 2 is partially completed by the UAV. However, due to the limited loading capacity of UAVs, a portion of the data of user 2 is offloaded to the LEO satellite for further computation. The data from user 3 is offloaded to the satellite, relayed by a UAV, while the data from user 4 is transmitted by a UAV to the HAP for computation. Note that the data from user 2 and user 3 share the computation resources of the same LEO satellite. In particular, according to <cit.>, there exist the following computation offloading modes: binary and partial. The binary mode means the data is either fully offloaded or completed locally, as in the data offloading modes of user 3 and user 4 in Fig. <ref>. The partial mode means the cooperation of different platforms to complete the same computation task, such as the service procedures of user 1 and user 2 in Fig. <ref>. Indeed, the detailed offloading decision policy should be determined according to the optimization objective and the considered metrics, such as limited energy supply, communication resource limitations, delay QoS, etc. General methods such as game theory, convex optimization, and machine learning are available; a toy sketch of the hierarchical offloading logic is also given below. §.§.§ Numerical example We explore the performance of different MEC paradigms in Fig. <ref>. Specifically, the scenario is set with four LEO satellites with an orbit height of 1000km, four UAVs uniformly distributed in a fixed area at an altitude of 2km, and 20-120 users within the coverage. The hierarchical MEC is implemented by the cooperation of UAVs and LEO satellites, while the compared methods are single-layer LEO satellite and single-layer UAV based MEC. From Fig. <ref>, it is observed that, with respect to the number of served users, the hierarchical MEC is outstanding compared to the single-layer MEC, and the result is in accordance with intuitive understanding. Besides, the LEO-based MEC performs better than the UAV-based MEC due to the larger coverage and stronger computing load capacity of LEO satellites. §.§.§ LEO Satellites Based Network Slicing in SAGIN Recently, due to advantages such as programmability and virtualization introduced by NFV and SDN, the related network slicing techniques have been introduced into satellite networks, aiming at cost-effective resource utilization and high QoS performance, as well as lowering the capital and operational expenditures <cit.>. The basics depend on advanced reconfigurable resources in both software and hardware. SDN is equipped with the feature of data and control separation, and operates in a centralized management mode. 
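For illustration only, a minimal sketch of the UAV-first, overflow-to-LEO admission logic behind the hierarchical MEC example above is given here; the capacities, the user counts, and the greedy rule are hypothetical assumptions made for this sketch, not the evaluated scheme.

```python
# Toy sketch of hierarchical MEC admission: users are first admitted to the UAV tier,
# and the overflow spills to the LEO tier. All numbers here are illustrative assumptions.

def served_users(num_users, uav_capacity, leo_capacity):
    """Return how many users each tier serves under a greedy UAV-first policy."""
    by_uav = min(num_users, uav_capacity)
    by_leo = min(num_users - by_uav, leo_capacity)
    return by_uav, by_leo

if __name__ == "__main__":
    for n in (20, 60, 120):  # user counts in the range of the numerical example above
        uav, leo = served_users(n, uav_capacity=40, leo_capacity=60)
        print(f"{n:3d} users -> UAV: {uav}, LEO: {leo}, unserved: {n - uav - leo}")
```

In this toy model, a single-layer baseline corresponds to setting one of the two capacities to zero, which is why the hierarchical variant serves at least as many users.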
In terms of the specific implementation, the NFV technique is utilized to provide multiple services for different users by decoupling the VNFs from physical platforms. In addition, a series of VNFs for one task forms an SFC. Moreover, network slicing can also be extended to the whole SAGIN, instead of only satellite networks. Such characteristics facilitate an elastic SAGIN, and SFC deployment in SAGIN is a significant mode for efficient resource implementation. For clarity, a network slicing scenario via SFC deployment in SAGIN is presented in Fig. <ref>. In particular, LEO satellites in space and the aerial vehicles can provide flexible services leveraging SDN and NFV techniques, and network slicing can be realized via SFC deployment in SAGIN. In detail, there exist three different SFC requirements in Fig. <ref>: the SFC of user 1 includes only one VNF, which is deployed on a LEO satellite, and the UAV serves as a relay. The SFC corresponding to user 2 is composed of three VNFs, which are sequentially deployed on a UAV, a HAP, and a LEO satellite, respectively. The SFC from the ocean user 3 includes two VNFs, which are respectively mapped onto a UAV and a LEO satellite. Note that the SFCs of user 1 and user 2 share the resources of the same LEO satellite. Network slicing in SAGIN improves resource efficiency, for example, by increasing stable coverage, reducing latency, etc. Fundamentally, the related problem should consider SFC deployment optimization, multiple resource limitations, the coupled restriction between VNF mapping and routing selection, as well as time-horizon related scheduling. §.§.§ Numerical example To clearly verify the performance of LEO satellite enabled network slicing in SAGIN, numerical results are conducted in a simple use case and Fig. <ref> shows the results. Specifically, the scenario is set within a small-scale Walker constellation composed of sixteen LEO satellites with an orbit height of 1000km, six UAVs uniformly distributed in a fixed area at an altitude of 2km, and 10-100 tasks with SFC requirements. In Fig. <ref>, different paradigms including flexible deployment, random deployment, and fixed deployment are compared. It is obvious that the flexible SFC deployment performs best, while the random deployment leads to multiple conflicts and resource inefficiency, which typically results in SFC failure. As for the fixed deployment, NFV is not supported in such a strategy, and VNFs must be deployed on specified platforms, so the SFC request is liable to fail if there are no active communication links. § OPEN ISSUES AND DIRECTIONS Although LEO satellites open up significant potential in SAGIN access networks, there still exist a couple of open issues, such as the large number of LEO satellites in mega-constellations, high dynamics leading to frequent handover, on-board processing and control, information safety, etc. For clarity, we summarize such issues in Table <ref>. To tackle these challenges, we provide the following possible directions and tips for future research. * As for the large number of LEO satellites in mega-constellations, the opportunities rely on the co-design of various LEO satellite constellations, and future directions may advocate increasing the interoperability among LEO satellites from different companies or countries via protocol design. 
* In order to deal with the high dynamics of LEO satellites, and the related frequent handover issues of users, the cooperation of multiple platforms can guarantee continuity to some extent. Besides, future directions may depend on mobility management strategy design, efficient handover decisions, and developing distributed management systems. * To address the challenge of on-board processing and control, intelligent and machine learning based techniques can typically be employed. In particular, online learning mechanism design as well as distributed control paradigms may be promising approaches for future directions. * Due to the global service and openness of SAGIN, especially LEO satellite networks, information safety correspondingly becomes an intractable issue. Together with the opportunity of privacy protection, emerging popular techniques such as differential privacy as well as blockchain can be considered as future directions for SAGIN safety. Essentially, the open challenges of LEO satellite access in SAGIN are not limited to the issues discussed above; orbit scarcity, frequency competition, and resource capability limitations are all issues that should be focused on, and available strategies should be devised. § CONCLUSIONS In this work, we have reviewed the basics and details of LEO satellite based access modes and techniques in SAGIN. First, we have elaborated the SAGIN preliminaries and current development, as well as the possible applications. Then, the potential of LEO access patterns, including direct access, multi-hop, relay, caching, MEC, and NFV implementation, has been elaborated. Besides, based on the recent advanced techniques MEC and NFV, the multi-layer MEC as well as the SFC deployment based network slicing in SAGIN have been detailed, and corresponding use cases have been verified. Further, the open challenges and possible directions have also been predicted and analyzed. We expect this study can promote the development of LEO multiple access possibilities in SAGIN. 1263-xiaozhenyu-JSAC2022 Z. Xiao, Z. Han, A. Nallanathan, O. A. Dobre, B. Clerckx, J. Choi, C. He, and W. Tong, “Antenna array enabled space/air/ground communications and networking for 6G,” IEEE J. Sel. Areas Commun., vol. 40, no. 10, pp. 2773–2804, Aug. 2022. jzy-IoT-bender Z. Jia, M. Sheng, J. Li, and Z. Han, “Towards data collection and transmission in 6G space-air-ground integrated networks: Cooperative HAP and LEO satellite schemes,” IEEE IoT J., vol. 9, no. 13, pp. 10 516–10 528, Oct. 2021. 1264-xiaozhenyu-CC2022 H. Cui, J. Zhang, Y. Geng, Z. Xiao, T. Sun, N. Zhang, J. Liu, Q. Wu, and X. Cao, “Space-air-ground integrated network (SAGIN) for 6G: Requirements, architecture and challenges,” China Commun., vol. 19, no. 2, pp. 90–108, Feb. 2022. 1268-helijun-TMC2 L. He, J. Li, Y. Wang, J. Zheng, and L. He, “Balancing total energy consumption and mean makespan in data offloading for space-air-ground integrated networks,” IEEE Trans. Mob. Comput., pp. 1–14, Nov. 2022. 1260-xiaozhenyu-WC Z. Xiao, J. Yang, T. Mao, C. Xu, R. Zhang, Z. Han, and X.-G. Xia, “LEO satellite access network (LEO-SAN) towards 6G: Challenges and approaches,” IEEE Wireless Commun., pp. 1–8, Dec. 2022. 1273-xuelin-JSAC-LEO X. Cao, B. Yang, Y. Shen, C. Yuen, Y. Zhang, Z. Han, H. Vincent Poor, and L. Hanzo, “Edge-assisted multi-layer offloading optimization of LEO satellite-terrestrial integrated networks,” IEEE J. Sel. Areas Commun., pp. 1–1, Dec. 2022. ziyejia-jsac Z. Jia, M. Sheng, J. Li, D. 
Zhou, and Z. Han, “Joint HAP access and LEO satellite backhaul in 6G: Matching game-based approaches,” IEEE J. Sel. Areas Commun., vol. 39, no. 4, pp. 1147–1159, Apr. 2021. 1020-dongchao-mag-uav C. Dong, Y. Shen, Y. Qu, K. Wang, J. Zheng, Q. Wu, and F. Wu, “UAVs as an intelligent service: Boosting edge intelligence for air-ground integrated networks,” IEEE Network, vol. 35, no. 4, pp. 167–175, Aug. 2021. 990-HAP-mag2021 M. S. Alam, G. K. Kurt, H. Yanikomeroglu, P. Zhu, and N. D. D o, “High altitude platform station based super macro base station constellations,” IEEE Commun. Mag., vol. 59, no. 1, pp. 103–109, Feb. 2021. 1276-role-LEO T. Ahmmed, A. Alidadi, Z. Zhang, A. U. Chaudhry, and H. Yanikomeroglu, “The digital divide in Canada and the role of LEO satellites in bridging the gap,” IEEE Commun. Mag., vol. 60, no. 6, pp. 24–30, Jun. 2022. JZY-TWC Z. Jia, M. Sheng, J. Li, D. Zhou, and Z. Han, “VNF-based service provision in software defined LEO satellite networks,” IEEE Trans. Wireless Commun., vol. 20, no. 9, pp. 6139–6153, Sep. 2021. 350-SapceIOTml-chengnanjsac N. Cheng, F. Lyu, W. Quan, C. Zhou, H. He, W. Shi, and X. Shen, “Space/Aerial-Assisted Computing Offloading for IoT Applications: A Learning-Based Approach,” IEEE J. Sel. Areas Commun., vol. 37, no. 5, pp. 1117–1129, May 2019. 436-ultra-dense-LEO B. Di, L. Song, Y. Li, and H. V. Poor, “Ultra-dense LEO: Integration of satellite access networks into 5G and beyond,” IEEE Wireless Commun., vol. 26, no. 2, pp. 62–69, Apr. 2019. 1269-jisijing-Mag S. Ji, M. Sheng, D. Zhou, W. Bai, Q. Cao, and J. Li, “Flexible and distributed mobility management for integrated terrestrial-satellite networks: Challenges, architectures, and approaches,” IEEE Network, vol. 35, no. 4, pp. 73–81, Jul. 2021. 1271-ultra-dense-LEO-mag R. Wang, M. A. Kishk, and M.-S. Alouini, “Ultra-dense LEO satellite-based communication systems: A novel modeling technique,” IEEE Commun. Mag., vol. 60, no. 4, pp. 25–31, Apr. 2022. 1278-LEO-mag-zhouhaibo T. Ma, B. Qian, X. Qin, X. Liu, H. Zhou, and L. Zhao, “Satellite-terrestrial integrated 6G: An ultra-dense LEO networking management architecture,” IEEE Wireless Commun., pp. 1–8, Dec. 2022. 1259-caoxuelin-SIN-Mag X. Cao, B. Yang, C. Huang, C. Yuen, Y. Zhang, D. Niyato, and Z. Han, “Converged reconfigurable intelligent surface and mobile edge computing for space information networks,” IEEE Network, vol. 35, no. 4, pp. 42–48, Aug. 2021. 1267-megaLEO-CM P. Cassará, A. Gotta, M. Marchese, and F. Patrone, “Orbital edge offloading on mega-LEO satellite constellations for equal access to computing,” IEEE Commun. Mag., vol. 60, no. 4, pp. 32–36, Apr. 2022. 1274-LEO-MEC Y. Zhang, C. Chen, L. Liu, D. Lan, H. Jiang, and S. Wan, “Aerial edge computing on orbit: A task offloading and allocation scheme,” IEEE Trans. Network Sci. Eng., pp. 1–11, Sep. 2022. 1275-sat-computing-FL H. Chen, M. Xiao, and Z. Pang, “Satellite-based computing networks with federated learning,” IEEE Wireless Commun., vol. 29, no. 1, pp. 78–84, Feb. 2022. helijun-TMC1 L. He, B. Liang, J. Li, and M. Sheng, “Joint observation and transmission scheduling in agile satellite networks,” IEEE Trans. Mob. Comput., vol. 21, no. 12, pp. 4381–4396, Dec. 2022. 1288-xiaozhenyu-HAPmag Z. Xiao, T. Mao, Z. Han, and X.-G. Xia, “Near space communications: A new regime in space-air-ground integrated networks,” IEEE Wireless Commun., vol. 29, no. 6, pp. 38–45, Dec. 2022. jzy-IoT-aerialcomputing Z. Jia, Q. Wu, C. Dong, C. Yuen, and Z. 
Han, “Hierarchical aerial computing for Internet of Things via cooperation of HAPs and UAVs,” IEEE IoT J., early access, Feb. 2022. 1285_LEO_navigation Z. Kassas, J. Morton, F. van Diggelen, J. Spilker, and B. Parkinson, “Navigation from low earth orbit-part 2: Models implementation and performance,” in Position, Navigation, and Timing Technologies in the 21st Century. Wiley, 2021, vol. 2, pp. 1381–1412. 1277-LEO-mag-positioning R. M. Ferre, E. S. Lohan, H. Kuusniemi, J. Praks, S. Kaasalainen, C. Pinell, and M. Elsanhoury, “Is LEO-based positioning with mega- constellations the answer for future equal access localization?” IEEE Commun. Mag., vol. 60, no. 6, pp. 40–46, Jun. 2022.
http://arxiv.org/abs/2307.01951v1
20230704230321
A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks
[ "Vignesh Kothapalli", "Tom Tirer", "Joan Bruna" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.IT", "math.IT", "math.OC", "stat.ML" ]
A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks Vignesh Kothapalli, Tom Tirer, and Joan Bruna August 1, 2023 ================================================================================================================================================================= Graph neural networks (GNNs) have become increasingly popular for classification tasks on graph-structured data. Yet, the interplay between graph topology and feature evolution in GNNs is not well understood. In this paper, we focus on node-wise classification, illustrated with community detection on stochastic block model graphs, and explore the feature evolution through the lens of the “Neural Collapse” (NC) phenomenon. When training instance-wise deep classifiers (e.g. for image classification) beyond the zero training error point, NC demonstrates a reduction in the deepest features' within-class variability and an increased alignment of their class means to certain symmetric structures. We start with an empirical study that shows that a decrease in within-class variability is also prevalent in the node-wise classification setting, however, not to the extent observed in the instance-wise case. Then, we theoretically study this distinction. Specifically, we show that even an “optimistic” mathematical model requires that the graphs obey a strict structural condition in order to possess a minimizer with exact collapse. Interestingly, this condition is viable also for heterophilic graphs and relates to recent empirical studies on settings with improved GNNs' generalization. Furthermore, by studying the gradient dynamics of the theoretical model, we provide reasoning for the partial collapse observed empirically. Finally, we present a study on the evolution of within- and between-class feature variability across layers of a well-trained GNN and contrast the behavior with spectral methods. § INTRODUCTION Graph neural networks <cit.> employ message-passing mechanisms to capture intricate topological relationships in data and have become de-facto standard architectures to handle data with non-Euclidean geometric structure <cit.>. However, the influence of topological information on feature learning in GNNs is yet to be fully understood <cit.>. In this paper, we study the feature evolution in GNNs in a node-wise supervised classification setting. In order to gain insights into the role of topology, we focus on the controlled environment of the prominent stochastic block model (SBM) <cit.>. The SBM provides an effective framework to control the level of sparsity, homophily, and heterophily in the random graphs and facilitates analysis of GNN which relies solely on structural information <cit.>. While inductive supervised learning on graphs is a relatively more difficult problem than transductive learning, it aligns with practical scenarios where nodes need to be classified in unseen graphs <cit.>, and is also amenable to training GNNs that are deeper than conventional shallow Graph Convolution Network (GCN) models <cit.>. The empirical and theoretical study of GNNs' feature evolution in this paper employs a “Neural Collapse” perspective <cit.>. 
When training Deep Neural Networks (DNNs) for classification, it is common to continue optimizing the networks' parameters beyond the zero training error point <cit.>, a stage that was referred to in <cit.> as the “terminal phase of training” (TPT). Papyan, Han, and Donoho <cit.> have empirically shown that a phenomenon, dubbed Neural Collapse (NC), occurs during the TPT of plain DNNs[Throughout the paper, by (plain) DNNs we mean networks that output an instance-wise prediction (e.g., image class rather than pixel class), while by GNNs we mean networks that output node-wise predictions.] on standard instance-wise classification datasets. NC encompasses several simultaneous properties: (NC1) The within-class variability of the deepest features decreases (i.e., outputs of the penultimate layer for training samples from the same class tend to their mean); (NC2) After subtracting their global mean, the mean features of different classes become closer to a geometrical structure known as a simplex equiangular tight frame; (NC3) The last layer's weights exhibit alignment with the classes' mean features. A consequence of NC1-3 is that the classifier’s decision rule becomes similar to the nearest class center decision rule in the feature space. We refer to <cit.> for a review on this topic. The common approach to theoretically study the NC phenomenon is the “Unconstrained Features Model” (UFM) <cit.>. The core idea behind this “optimistic” mathematical model is that the deepest features are considered to be freely optimizable. This idea has facilitated a recent surge of theoretical works in an effort to understand the global optimality conditions and gradient dynamics of these features and the last layer's weights in DNNs <cit.>. In our work, we extend NC analysis to settings where relational information in data is paramount, and creates a tension with the `freeness' associated with the UFM model. In essence, we highlight the key differences when analyzing NC in GNNs by identifying structural conditions on the graphs under which the global minimizers of the training objective exhibit full NC1. Interestingly, the structural conditions that we rigorously establish in this paper are aligned with the neighborhood conditions on heterophilic graphs that have been empirically hypothesized to facilitate learning by <cit.>. Our main contributions can be summarized as follows: * We conduct an extensive empirical study that shows that a decrease in within-class variability is prevalent also in the deepest features of GNNs trained for node classification on SBMs, however, not to the extent observed in the instance-wise setting. * We propose and analyze a graph-based UFM to understand the role of node neighborhood patterns and their community labels on NC dynamics. We prove that even this optimistic model requires a strict structural condition on the graphs in order to possess a minimizer with exact variability collapse. Then, we show that satisfying this condition is a rare event, which theoretically justifies the distinction between observations for GNNs and plain DNNs. * Nevertheless, by studying the gradient dynamics of the graph-based UFM, we provide theoretical reasoning for the partial collapse during GNNs training. * Finally, we present a study on the evolution of features across the layers of well-trained GNNs and contrast the decrease in NC1 metrics along depth with a NC1 decrease along power iterations in spectral clustering methods. 
§ PRELIMINARIES AND PROBLEM SETUP We focus on supervised learning on graphs for inductive community detection. Formally, we consider a collection of K undirected graphs {_k = (_k, _k) }_k=1^K, each with N nodes, C non-overlapping balanced communities and a node labelling ground truth function y_k: _k →{_1, …, _C }. Here, ∀ c ∈ [C], _c ∈^C indicates the standard basis vector, where we use the notation [C] = { 1, ⋯, C}. The goal is to learn a parameterized GNN model ψ_Θ(.) which minimizes the empirical risk given by: min_Θ1/K∑_k=1^K ℒ(ψ_Θ(_k), y_k) + λ/2Θ_F^2, where ·_F represents the Frobenius norm, ℒ is the loss function that is invariant to label permutations <cit.>, and λ > 0 is the penalty parameter. We choose based on the mean squared error (MSE) as: ℒ(ψ_Θ(_k), y_k) = min_π∈ S_C1/2Nψ_Θ(_k) - π(y_k (_k) ) _2^2, where π belongs to the permutation group over C elements. Using the MSE loss for training DNN classifiers has become increasingly popular recently. For example, <cit.> have performed an extensive empirical study that shows that training with MSE loss yields performance that is similar to (and sometimes even better than) training with CE loss. This choice also facilitates theoretical analyses <cit.>. §.§ Data model We employ the Symmetric Stochastic Block Model (SSBM) to generate graphs {_k = (_k, _k) }_k=1^K. Stochastic block models (originated in <cit.>) are classical random graph models that have been extensively studied in statistics, physics, and computer science. In the SSBM model that is considered in this paper, each graph _k is associated with an adjacency matrix _k ∈^N × N, degree matrix _k = diag(_k) ∈^N × N, and a random node features matrix _k ∈^d × N, with entries sampled from a normal distribution. Formally, if ∈^C × C represents a symmetric matrix with diagonal entries p and off-diagonal entries q, a random graph _k is considered to be drawn from the distribution SSBM(N, C, p, q) if an edge between vertices v_i, v_j is formed with probability ()_y_k(v_i), y_k(v_j). We choose the regime of exact recovery <cit.> in sparse graphs where p = a ln(N)/N, q = b ln(N)/N for parameters a, b ≥ 0 such that |√(a) - √(b)| > √(C). The need for exact recovery (information-theoretically) stems from the requirement that ψ_Θ should be able to reach TPT. §.§ Graph neural networks Inspired by the widely studied model of higher-order GNNs by <cit.>, we design ψ_Θ based on a family of graph operators = {, _k}, ∀ k ∈ [K], and denote it as ψ_Θ^. Formally, for a GNN ψ_Θ^ with L layers, the node features _k^(l)∈^d_l× N at layer l ∈ [L] is given by: _k^(l) = _1^(l)_k^(l-1) + _2^(l)_k^(l-1)_k, _k^(l) = σ(_k^(l)), where _k^(0) = _k, and σ(·) represents a point-wise activation function such as ReLU. _1^(l), _2^(l)∈^d_l× d_l-1 are the weight matrices and _k = _k_k^-1 is the normalized adjacency matrix, also known as the random-walk matrix. We also consider a simpler family without the identity operator ' = {_k }, ∀ k ∈ [K] and analyze the GNN ψ_Θ^' with only graph convolution functionality. Formally, the node features _k^(l)∈^d_l× N for ψ_Θ^' is given by: _k^(l) = _2^(l)_k^(l-1)_k, _k^(l) = σ(_k^(l)). Here, the subscript for the weight matrix _2^(l) is retained to highlight that it acts on _k^(l-1)_k. Finally, we employ the training strategy of <cit.> and apply instance-normalization <cit.> on σ(_k^(l)), ∀ l ∈{1, ⋯, L-1} to prevent training instability. 
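As a concrete illustration of the layer updates and the SSBM sampling described above, a minimal PyTorch-style sketch is given below. It is not the authors' reference implementation; the node-major tensor convention, the dimensions, and the seed are assumptions made only for this sketch.

```python
# A minimal sketch of one layer from the operator family {I, A_hat}:
# H^(l) = W1 H^(l-1) + W2 H^(l-1) A_hat, followed by ReLU and instance normalization.
# A node-major convention X in R^{N x d} is assumed, so neighborhood averaging
# uses the row-normalized adjacency D^{-1} A.
import torch
import torch.nn as nn


def sample_ssbm(n_per_class, C, p, q, seed=0):
    """Sample a symmetric SBM adjacency and its row-normalized (random-walk) matrix."""
    g = torch.Generator().manual_seed(seed)
    N = n_per_class * C
    y = torch.arange(N) // n_per_class                      # labels, class-by-class
    probs = q + (p - q) * (y[:, None] == y[None, :]).float()
    upper = torch.bernoulli(torch.triu(probs, diagonal=1), generator=g)
    A = upper + upper.T                                     # undirected, no self-loops
    deg = A.sum(dim=1).clamp(min=1.0)                       # guard isolated nodes
    return A, A / deg[:, None], y                           # row-normalized aggregation


class GraphOpLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_out, bias=False)        # identity-operator branch
        self.w2 = nn.Linear(d_in, d_out, bias=False)        # graph-convolution branch
        self.norm = nn.InstanceNorm1d(d_out)

    def forward(self, x, a_rw):
        h = self.w1(x) + self.w2(a_rw @ x)                  # graph operator
        h = torch.relu(h)                                   # point-wise activation
        # instance-normalize each feature channel over the nodes of the graph
        return self.norm(h.T.unsqueeze(0)).squeeze(0).T
```

Stacking several such layers mirrors the first GNN family above, while dropping the w1 branch gives the simpler graph-convolution-only family.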
§.§ Tracking neural collapse in GNNs In our setup, reaching zero training error (TPT) implies that the network perfectly classifies all the nodes (up to label permutations) in all the training graphs. To this end, we leverage the NC metrics introduced in <cit.> and extend them to GNNs in an inductive setting. To begin with, let us consider a single graph _k=(_k, _k), k ∈ [K] with a normalized adjacency matrix _k. Additionally, we denote _k^(l)∈^d_l× N as the output of layer l ∈ [L-1], irrespective of the GNN design. Now, by dropping the subscript and superscript for notational convenience, we define the class means and the global mean of as follows: _c := 1/n∑_i=1^n_c, i,∀ c ∈ [C], _G := 1/Cn∑_c=1^C∑_i=1^n_c,i, where n=N/C represents the number of nodes in each of the C balanced communities, and _c, i is the feature vector (a column in ) associated with v_c, i∈, i.e., the i^th node belonging to class c ∈ [C]. Next, let (v_c,i) denote all the neighbors of v_c, i and let _c'(v_c,i) denote only the neighbors of v_c,i that belong to class c' ∈ [C]. We define the class means and global mean of , which is unique to the GNN setting as follows: ^_c := 1/n∑_i=1^n^_c, i,∀ c ∈ [C], ^_G := 1/Cn∑_c=1^C∑_i=1^n^_c,i, where _c,i^ = ( ∑_𝒩_c(v_c,i)_c,j + ∑_𝒩_c' c(v_c,i)_c',j ) / |𝒩(v_c,i)|. ∙ Variability collapse in features : For a given features matrix , let us define the within- and between-class covariance matrices, _W() and _B(), as: _W() := 1/Cn∑_c=1^C∑_i=1^n ( _c, i - _c )( _c, i - _c )^⊤, _B() := 1/C∑_c=1^C ( _c - _G )( _c - _G )^⊤. To empirically track the within-class variability collapse with respect to the between-class variability, we define two NC1 metrics: _1() = 1/CTr(_W()_B^†()) , _1() = Tr(_W())/Tr(_B()), where ^† denotes the Moore-Penrose pseudo-inverse and Tr(·) denotes the trace of a matrix. Although _1 is the original NC1 metric used by <cit.>, we consider also _1, which has been proposed by <cit.> as an alternative metric that is more amenable to theoretical analysis. ∙ Variability collapse in neighborhood-aggregated features : Similarly to the above, we track the within- and between-class variability of the “neighborhood-aggregated” features matrix by _W() and _B() (computed using ^_c and ^_G), as well as _1() and _1(). (See Appendix <ref> for formal definitions.) Finally, we follow a simple approach and track the mean and variance of _1(), _1(), _1(), _1() across all K graphs in our experiments. As the primary focus of our paper is the analysis of feature variability during training and inference, we defer the definition and examination of metrics based on NC2 and NC3 to Appendix <ref>, <ref>. § EVOLUTION OF PENULTIMATE LAYER FEATURES DURING TRAINING In this section, we explore the evolution of the deepest features of GNNs during training. In Section <ref>, we present empirical results of GNNs in the setup that is detailed in Section <ref>, showing that a decrease in within-class feature variability is present in GNNs that reach zero training error, but not to the extent observed with plain DNNs. Then, in Section <ref>, we theoretically study a mathematical model that provides reasoning for the empirical observations. §.§ Experiments Setup. We focus on the training performance of GNNs ψ_Θ^, ψ_Θ^' on sparse graphs and generate a dataset of K=1000 random SSBM graphs with C=2, N=1000, p=0.025, q=0.0017. The networks ψ_Θ^, ψ_Θ^' are composed of L=32 layers with graph operator, ReLU activation, and instance-normalization functionality. 
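In practice, the NC1 metrics tracked in these experiments can be computed directly from a penultimate-layer feature matrix; a small NumPy sketch is given below, assuming (for illustration only) that the N columns are ordered class-by-class with n = N/C nodes per class.

```python
# A small sketch of the NC1 metrics defined above for a feature matrix H in R^{d x N},
# with columns ordered class-by-class (an assumption made only for this illustration).
import numpy as np


def nc1_metrics(H, C):
    d, N = H.shape
    n = N // C
    class_means = np.stack([H[:, c * n:(c + 1) * n].mean(axis=1) for c in range(C)], axis=1)
    global_mean = class_means.mean(axis=1, keepdims=True)

    Sigma_W = np.zeros((d, d))
    for c in range(C):
        Z = H[:, c * n:(c + 1) * n] - class_means[:, [c]]   # within-class deviations
        Sigma_W += Z @ Z.T / (C * n)
    Zb = class_means - global_mean                          # between-class deviations
    Sigma_B = Zb @ Zb.T / C

    nc1 = np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / C   # Tr(Sigma_W Sigma_B^dagger) / C
    nc1_tilde = np.trace(Sigma_W) / np.trace(Sigma_B)       # Tr(Sigma_W) / Tr(Sigma_B)
    return nc1, nc1_tilde
```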
The hidden feature dimension is set to 8 across layers. They are trained for 8 epochs using stochastic gradient descent (SGD) with a learning rate 0.004, momentum 0.9, and a weight decay of 5 × 10^-4. During training, we track the NC1 metrics for the penultimate layer features _k^(L-1), by computing their mean and standard deviation across k ∈ [K] graphs after every epoch. To measure the performance of the GNN, we compute the `overlap' <cit.> between predicted communities and ground truth communities (up to permutations): overlap(ŷ, y) := max_π∈ S_C(1/N∑_i=1^N δ_ŷ(v_i), π(y(v_i)) - 1/C)/(1 - 1/C) where ŷ is the node labelling function based on GNN design and 1/N∑_i=1^N δ_ŷ(v_i), π(y(v_i)) is the training accuracy (δ denotes the Kronecker delta). The overlap allows us to measure the improvements in performance over random guessing while retaining the indication that the GNN has reached TPT. Formally, when 1/N∑_i=1^N δ_ŷ(v_i), π(y(v_i))=1 (zero training error), then overlap(ŷ, y)=1. We illustrate the empirical results in Figures <ref> and <ref>, and present extensive experiments (showing similar behavior) along with infrastructure details in Appendix <ref>[Code is available at: https://github.com/kvignesh1420/gnn_collapsehttps://github.com/kvignesh1420/gnn_collapse]. Observation: The key takeaway is that _1(^(L-1)_k), _1(^(L-1)_k) tend to reduce and plateau during TPT in ψ_Θ^ and ψ_Θ^'. Notice that even though we consider a controlled SSBM-based setting, the _1 values observed here are higher than the values observed in the case of plain DNNs on real-world instance-wise datasets <cit.>. Additionally, we can observe that trends for _1(^(L-1)_k_k), _1(^(L-1)_k_k) are similar to those of _1(^(L-1)_k), _1(^(L-1)_k). §.§ Theoretical analysis In this section, we provide a theory for this empirical behavior. Most, if not all, of the theoretical papers on NC, adopt the UFM approach, which treats the features as free optimization variables – disconnected from data <cit.>. Here, we consider a graph-based adaptation of this approach, that we dubbed as gUFM. We consider GNNs of the form of ψ_Θ^', which is more tractable for mathematical analysis. Formally, by considering ℒ to be the MSE loss, treating {_k^(L-1)}_k=1^K as freely optimizable variables, and representing _2^(L)∈^C × d_L-1, _k^(L-1)∈^d_L-1× N as _2, _k (for notational convenience), the empirical risk based on the gUFM can be formulated as follows: ^'(_2, {_k}_k=1^K) := 1/K∑_k=1^K ( 1/2N_2_k_k - _F^2 + λ_H_k/2_k_F^2 ) + λ_W_2/2_2_F^2 where ∈^C × N is the target matrix, which is composed of one-hot vectors associated with the different classes, and λ_W_2, λ_H_k > 0 are regularization hyperparameters. To simplify the analysis, let us assume that = _C ⊗_n^⊤, where ⊗ denotes the Kronecker product. Namely, the training data is balanced (a common assumption in UFM-based analyses in literature) with n=N/C nodes per class in each graph and (without loss of generality) organized class-by-class. Note that for K=1 (which allows omitting the graph index k) and no graphical structure, i.e., = (since =), (<ref>) reduces to the plain UFM that has been studied in <cit.>. In this case, it has been shown that any minimizer (_2^*,^*) is collapsed, i.e., its features have exactly zero within-class variability: _c, 1^* = …=_c, n^* = ^*_c, ∀ c ∈ [C], which implies _W(^*)=0. We will show now that the situation in gUFM is significantly different. 
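For concreteness, the K = 1 gUFM risk above can be written in a few lines of NumPy; the sketch below treats the features as a free variable, and the regularization values are illustrative placeholders rather than part of the analysis.

```python
# A hedged sketch of the K = 1 gUFM risk: (1/2N) ||W2 H A_hat - Y||_F^2
# + (lambda_H / 2) ||H||_F^2 + (lambda_W2 / 2) ||W2||_F^2, with Y = I_C kron 1_n^T.
# The regularization weights below are illustrative placeholders.
import numpy as np


def one_hot_targets(C, n):
    return np.kron(np.eye(C), np.ones((1, n)))              # Y = I_C kron 1_n^T


def gufm_risk(W2, H, A_hat, Y, lam_H=5e-3, lam_W2=5e-3):
    N = H.shape[1]
    fit = np.linalg.norm(W2 @ H @ A_hat - Y, "fro") ** 2 / (2 * N)
    reg_H = 0.5 * lam_H * np.linalg.norm(H, "fro") ** 2
    reg_W = 0.5 * lam_W2 * np.linalg.norm(W2, "fro") ** 2
    return fit + reg_H + reg_W
```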
Considering the K=1 case, we start by showing that, to have minimizers of (<ref>) that possess the property in (<ref>), the graph must obey a strict structural condition. For K>1, having a minimizer (_2^*, {_k^*}) where, for some j ∈ [K], _j^* is collapsed directly follows from having the structural condition satisfied by the j-th graph (as shown in our proof, the sufficiency of the condition does not depend on the shared weights _2). On the other hand, generalizing the necessity of the structural condition to the case of K>1 is technically challenging (see the appendix for details). For that reason, we state the condition in the following theorem only for K=1. Note also that, showing that the condition is unlikely to be satisfied per graph is enough for explaining the plateaus above zero of NC metrics (computed over multiple graphs), which are demonstrated in Section <ref>. Consider the gUFM in (<ref>) with K=1 and denote the fraction of neighbors of node v_c,i that belong to class c' as s_cc',i = |𝒩_c'(v_c,i)|/|𝒩(v_c,i)|. Let the condition C based on s_cc',i be given by: (s_c1,1, ⋯, s_cC,1) = ⋯ = (s_c1, n, ⋯, s_cC, n ), ∀ c ∈ [C]. C If a graph satisfies condition C, then there exist minimizers of the gUFM that are collapsed (satisfying (<ref>)). Conversely, when either √(λ_Hλ_W_2) = 0, or √(λ_Hλ_W_2) > 0 and G is regular (so that = ^⊤), if there exists a collapsed non-degenerate minimizer[Non-degenerate minimizers in the sense that _2^*^* ∈ℝ^C × N is full-rank. This eliminates degenerate `zero'-solutions which are obtained when the regularization hyper-parameters are large.] of gUFM, then condition C necessarily holds. Remark: The proof is presented in Appendix <ref>. The symmetry assumption on (which implies that is a regular graph) in the second part of the theorem has been made to pass technical obstacles in the proof rather than due to a true limitation. Thus, together with the results of our experiments (where no symmetry is enforced), we believe that this assumption can be dropped. Accordingly, we state the following conjecture. Consider the gUFM in (<ref>) with K=1 and condition C as stated in theorem <ref>. The minimizers of the gUFM are collapsed (satisfying (<ref>)) iff the graph satisfies condition C. Let us dwell on the implication of Theorem <ref>. The stated condition C essentially holds when any node i ∈ [n] of a certain class c obeys (s_c1,i, ⋯, s_cC,i) = (s_c1, ⋯, s_cC ) for some (s_c1, ⋯, s_cC ), a tuple of the ratio of neighbors (∑_c'=1^C s_cc'=1) independent of i. That is, (s_c1, ⋯, s_cC ) must be the same for nodes within the same class but can be different for nodes belonging to different classes. For example, for a plain UFM this condition trivially holds, as =. Under the SSBM distribution, it is also easy to see that 𝔼 satisfies this condition. However, for more practical graphs, such as those drawn from SSBM, the probability of having a graph that obeys condition C is negligible. This is shown in the following theorem. Let =(, ) be drawn from SSBM(N, C, p, q). For N >> C, we have ( obeys C) < ( ∑_t=0^n [n tq^t(1-q)^n - t]^n)^C(C-1)/2( ∑_t=0^n [n tp^t(1-p)^n - t]^n )^C. The proof is presented in Appendix <ref>. It is not hard to see that as the number of per-class nodes n increases, the probability of satisfying condition C decreases,[Each term in each of the sums is the n^th power of a number smaller than 1 (a binomial probability).] as numerically exemplified below. Numerical example. Let's consider a setting with C=2, N = 1000, a=3.75, b=0.25. 
This gives us n=N/C=500, p= 0.025, q= 0.0017, for which ( obeys C ) < 1.7 × 10^-1140. In Appendix <ref> we further show by exhaustive computation of ( obeys C) that its value is negligible even for smaller scale graphs. Thus, the probability of sampling a graph structure for which the gUFM minimizers exhibit exact collapse is practically 0. gUFM experiments. For a better understanding of these results, we present small-scale experiments using the gUFM model on graphs that satisfy and do not satisfy condition C. By training the gUFM (based on ψ_Θ^') on K=10 graphs that satisfy condition C, we can observe from Figure <ref> that NC1 metrics on , reduce significantly. On the other hand, these metrics plateau after sufficient reduction when the graphs fail to satisfy condition C, as shown in Figure <ref>. In both the cases, the SSBM parameters are C=2, N = 1000, p=0.025, q=0.0017, and the gUFM is trained using plain gradient descent for 50000 epochs with a learning rate of 0.1 and L2 regularization parameters λ_W_1 = λ_W_2 = λ_H = 5× 10^-3. Extensive experiments with varying choices of N, C, p, q, feature transformation based on ψ_Θ^ and additional NC metrics are provided in Appendix <ref>. Remark. Note that previous papers consider UFM configurations for which the minimizers possess exact NC, typically without any condition on the number of samples or on the hyperparameters of the settings. As the UFMs are “optimistic” models, in the sense that they ignore all the limitations on modifying the features that exist in the training of practical DNNs, such results can be understood as “zero-order” reasoning for practical NC behavior. On the other hand, here we show that even the optimistic gUFM will not yield perfectly collapsed minimizers for graph structures that are not rare. This provides a purer understanding of the gaps in GNNs' features from exact collapse and why these gaps are larger than for plain DNNs. We also highlight the observation that condition C applies to homophilic as well as heterophilic graphs, as the constraint on neighborhood ratios is independent of label similarity. Thus providing insights on the effectiveness of GNNs on highly heterophilic graphs as empirically observed by <cit.>. Gradient flow: By now, we have provided a theory for the distinction between the deepest features of GNNs and plain DNNs. Next, to provide reasoning for the partial collapse in GNNs, which is observed empirically, we turn to study the gradient dynamics of our gUFM. We consider the K=1 case and, following the common practice <cit.>, analyze the gradient flow along the “central path” — i.e., when _2=_2^*() is the optimal minimizer of ^'(_2,) w.r.t. _2, which has a closed-form expression as a function of . The resulting gradient flow is: d_t/dt = - ∇^'(_2^*(_t),_t). Similarly to <cit.>, we aim to gain insights on the evolution of _W(_t) and _B(_t) (in particular, their traces) along this flow. Yet, the presence of the structure matrix significantly complicates the analysis compared to existing works (which are essentially restricted to =). Accordingly, we focus on the case of two classes, C=2, and adopt a perturbation approach, analyzing the flow for a graph = 𝔼 +, where the expectation is taken with respect to the SSBM distribution and is a sufficiently small perturbation matrix. Our results are stated in the following theorem. Let K=1, C=2 and λ_W_2>0. 
There exist α>0 and E>0, such that for 0 < λ_H < α and 0<<E, along the gradient flow stated in (<ref>) associated with the graph = 𝔼 +, we have that: (1) Tr(_W(_t)) decreases, and (2) Tr(_B(_t)) increases. Accordingly, _1(_t) decreases. The proof is presented in Appendix <ref>. The importance of the theorem comes from showing that even graphs that do not satisfy condition C (in the context of the analysis: perturbations around 𝔼) exhibit reduction in the within-class covariance and increase in the between-class covariance of the features. This implies a reduction of NC1 metrics (to some extent), which is aligned with the empirical results in Section <ref>. § FEATURE SEPARATION ACROSS LAYERS DURING INFERENCE Till now, we have analyzed the feature evolution of the deepest GNN layer during training. In this section, we use these well-trained GNNs to classify nodes in unseen SSBM graphs and explore the depthwise evolution of features. In essence, we take an NC perspective on characterizing the weights of these well-trained networks that facilitate good generalization. To this end, we present empirical results demonstrating a gradual decrease of NC1 metrics along the network's depth. The observations hold a resemblance to the case with plain DNNs (shown empirically in <cit.> and more recently in <cit.>, and theoretically in <cit.>). To gain insights into this depthwise behavior we also compare it with the behavior of spectral clustering methods along their projected power iterations. §.§ Experiments Setup. We consider the 32-layered networks ψ_Θ^, ψ_Θ^' which have been designed and trained as per the setup in section <ref> and have reached TPT. These networks are now tested on a dataset of K=100 unseen random SSBM graphs with C=2, N=1000, p=0.025, q=0.0017. Additionally, we perform spectral clustering using projected power iterations on the Normalized Laplacian (NL) and Bethe-Hessian (BH) matrices <cit.> for each of the test graphs. The motivation behind this approach is to obtain an approximation of the Fiedler vector of NL/BH that sheds light on the hidden community structure <cit.>. Formally, for a test graph = (, ), the NL and BH matrices are given by: NL() = - ^-1/2^-1/2, BH(, r) = (r^2 - 1) - r + , where r ∈ is the BH scaling factor. Now, by treating to be either NL or BH matrix, a projected power iteration to estimate the second largest eigenvector of = - is given by: ^(l) = ^(l-1), where ^(l-1) = ^(l-1) - ⟨^(l-1), ⟩/^(l-1) - ⟨^(l-1), ⟩_2, with the vector ∈^N denoting the largest eigenvector of . Thus, we start with a random normal vector ^0 ∈^N and iteratively compute the feature vector ^(l)∈^N, which represents the 1-D feature for each node after l iterations. §.§ Towards understanding depthwise behavior From Figure <ref>, we can observe that the rate of decrease in NC1 metrics is much higher in ψ_Θ^ and ψ_Θ^' (avg test overlap =1) when compared to the baseline spectral approaches (avg test overlap NL=0.04, BH=0.15) with random normal feature initialization. For ψ_Θ^ and ψ_Θ^', the NC1 metrics and traces of covariance matrices are tracked after each of the components of a layer: graph operator, ReLU and instance normalization. For spectral methods, the components are: the operator and the normalization. Interestingly, this rate seems to be relatively higher in ψ_Θ^' than in ψ_Θ^, and the variance of metrics tends to reduce significantly across all the test graphs after a certain depth in ψ_Θ^' and ψ_Θ^. 
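For reference, the projected power iteration used by the spectral baselines can be sketched in a few lines; the deflation vector is assumed to be the unit-norm leading eigenvector of the shifted operator, and the iteration count is an illustrative choice.

```python
# A sketch of the projected power iteration behind the spectral baselines: deflate the
# leading eigenvector v (assumed unit-norm) of the shifted operator M_bar, normalize,
# and multiply. The iteration count below is an illustrative choice.
import numpy as np


def projected_power_iteration(M_bar, v, num_iters=32, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(M_bar.shape[0])                 # random normal start
    for _ in range(num_iters):
        x = x - np.dot(x, v) * v                            # project out the top eigenvector
        x = x / np.linalg.norm(x)                           # normalize
        x = M_bar @ x                                       # power step
    return np.sign(x)                                       # two-community estimate
```

Because this map is linear up to the normalization, its per-iteration effect on the feature covariances is fixed, in contrast to the learned, layer-dependent behavior of the GNNs discussed next.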
Intuitively, the presence of _1 in ψ_Θ^ seems to delay this reduction across layers. On the other hand, owing to the non-parametric nature of the spectral approaches, observe that the ratios Tr(_B(^(l)))/Tr(_B(^(l-1))), Tr(_W(^(l)))/Tr(_W(^(l-1))) tend to be constant throughout all iterations. However, the GNNs behave differently as Tr(_B(^(l)))/Tr(_B(^(l-1))), Tr(_W(^(l)))/Tr(_W(^(l-1))) tend to decrease across depth (Figure <ref>). For a better understanding of this phenomenon, we consider the case of C=2 (without loss of generality) and assume that the (l-1)^th-layer features ^(l-1) of nodes belonging to class c=1,2 are drawn from distributions _1, _2 respectively. We do not make any assumptions on the nature of the distributions and simply consider _1^(l-1), _2^(l-1)∈^d_l-1 and _1^(l-1), _2^(l-1)∈^d_l-1× d_l-1 as their mean vectors and covariance matrices, respectively. In the following theorem, we present bounds on the ratio of traces of feature covariance matrices after the graph operator is applied. Let C=2, λ_i(·), λ_-i(·) indicate the i^th largest and smallest eigenvalue of a matrix, β_1 = p-q/p+q, β_2 = p/n(p+q), β_3 = p^2 + q^2/n(p+q)^2, and denote _W = _1^*(l)⊤_1^*(l) + β_2[ _2^*(l)⊤_1^*(l) + _1^*(l)⊤_2^*(l)] + β_3_2^*(l)⊤_2^*(l), _B = ( _1^*(l) + β_1_2^*(l))^⊤( _1^*(l) + β_1_2^*(l)). Then, the ratios of traces Tr(_B(^(l)))/Tr(_B(^(l-1))), Tr(_W(^(l)))/Tr(_W(^(l-1))) for layer l ∈{2, ⋯, L} of a network ψ_Θ^ are bounded as follows: ∑_i=1^d_l-1λ_-i( _B(^(l-1)))λ_i(_B) /∑_i=1^d_l-1λ_i( _B(^(l-1)))≤Tr(_B(^(l)))/Tr(_B(^(l-1)))≤∑_i=1^d_l-1λ_i( _B(^(l-1)))λ_i(_B) /∑_i=1^d_l-1λ_i( _B(^(l-1))), ∑_i=1^d_l-1λ_-i( _W(^(l-1)))λ_i(_W) /∑_i=1^d_l-1λ_i( _W(^(l-1)))≤Tr(_W(^(l)))/Tr(_W(^(l-1)))≤∑_i=1^d_l-1λ_i( _W(^(l-1)))λ_i(_W) /∑_i=1^d_l-1λ_i( _W(^(l-1))). The proof is presented in Appendix <ref>. To understand the implications of this result, first observe that by setting _1^* = and modifying _W = β_3_2^*(l)⊤_2^*(l), _B = β_1^2_2^*(l)⊤_2^*(l), we can obtain a similar bound formulation for ψ_Θ^'. To this end, as _W,_B depend on the spectrum of _2^*(l), the ratios Tr(_B(^(l)))/Tr(_B(^(l-1))), Tr(_W(^(l)))/Tr(_W(^(l-1))) are highly dependent on β_1, β_3. Notice that since _1^*(l)⊤_1^*(l) in _W is not scaled by any factor that is inversely dependent on n, it tends to act as a spectrum controlling mechanism and the reduction in within-class variability of features in ψ_Θ^ is relatively slow when compared to ψ_Θ^'. Thus, justifying the empirical behavior that we observed in subplots <ref> and <ref> in Figure <ref>. § CONCLUSION In this work, we studied the feature evolution in GNNs for inductive node classification tasks. Adopting a Neural Collapse (NC) perspective, we analyzed both empirically and theoretically the within- and between-class variability of features along the training epochs and along the layers during inference. We showed that a partial decrease in within-class variability (and NC1 metrics) is present in the GNNs' deepest features and provided theory that indicates that greater collapse is not expected when training GNNs on practical graphs (as it requires strict structural conditions). We also showed a depthwise decrease in variability metrics, which resembles the case with plain DNNs. Especially, by leveraging the analogy of feature transformation across layers in GNNs with spectral clustering along projected power iterations, we provided insights into this GNN behavior and distinctions between two GNN architectures. 
Interestingly, the structural conditions on graphs for exact collapse, which we rigorously established in this paper, are aligned with those that have been empirically hypothesized to facilitate GNNs learning in <cit.> (outside the context of NC). As a direction for future research, one may try to use this connection to link NC behavior with the generalization performance of GNNs. The authors would like to thank Jonathan Niles-Weed, Soledad Villar, Teresa Huang, Zhengdao Chen, and Lei Chen for informative discussions and feedback. The authors acknowledge the NYU High Performance Computing services for providing the computing resources to run the experiments reported in this manuscript. This work is partially supported by NSF DMS 2134216, NSF CAREER CIF 1845360, NSF IIS 1901091, and the Alfred P Sloan Foundation. plainnat § ADDITIONAL NEURAL COLLAPSE METRICS In this appendix, we define additional NC metrics pertaining to NC1-3 for our problem setup. ∙ Variability collapse in neighborhood-aggregated features : We track the within- and between-class variability of the “neighborhood-aggregated” features matrix by defining the covariance matrices _W(), _B() as: _W() := 1/Cn∑_c=1^C∑_i=1^n ( ^_c, i - ^_c )( ^_c, i - ^_c )^⊤ _B() := 1/C∑_c=1^C ( ^_c - ^_G )( ^_c - ^_G )^⊤ To this end, we define the _1(), _1() metrics as follows: _1() = 1/CTr(_W()_B^†()), _1() = Tr(_W())/Tr(_B()). Their primary purpose is to track the within-class variability of neighborhood aggregated features relative to their between-class variability, both now dependent on the topological structure of graph . The motivation to track these features arises from the P2 condition in the proof of theorem 3.1 in Appendix B. Essentially, this condition states that the neighborhood aggregated features should collapse to their respective class means for the minimizer to satisfy NC. ∙ SNR for variance collapse: We track the following `Signal-to-Noise' ratios pertaining to variability collapse of and : SNR(_1) := _1(⊗_n^⊤)_F/_1( - ⊗_n^⊤)_F SNR(^_1) := _2(^⊗_n^⊤)_F/_2( - ^⊗_n^⊤)_F These SNR metrics provide an alternate perspective for us to empirically analyze the desirability of variance collapse. Here := [ _1 ⋯ _C ]∈^d_l × C and ^ := [ _1^ ⋯ _C^ ]∈^d_l × C are the class-mean matrices without and with neighborhood aggregation respectively. ∙ Convergence of weights to a simplex ETF: To track the convergence of the weights _1, _2 to a simplex ETF structure, we define ^ETF_2(_1), ^ETF_2(_2) as: ^ETF_2(_1) := _1_1^⊤/_1_1^⊤_F - 1/√(C-1)(_C - 1/C_C_C^⊤) _F ^ETF_2(_2) := _2_2^⊤/_2_2^⊤_F - 1/√(C-1)(_C - 1/C_C_C^⊤) _F ∙ Convergence of weights to an Orthogonal Frame (OF): To track the convergence of the weights _1, _2 to an orthogonal frame structure, we define ^OF_2(_1), ^OF_2(_2) as: ^OF_2(_1) := _1_1^⊤/_1_1^⊤_F - _C/√(C)_F ^OF_2(_2) := _2_2^⊤/_2_2^⊤_F - _C/√(C)_F ∙ Convergence of features to a simplex ETF: To track the convergence of the features , to a simplex ETF structure, we define ^ETF_2(), ^ETF_2() as: ^ETF_2() := ^⊤/^⊤_F - 1/√(C-1)(_C - 1/C_C_C^⊤) _F ^ETF_2() := ^⊤^/^⊤^_F - 1/√(C-1)(_C - 1/C_C_C^⊤) _F where := [ _1 - _G ⋯ _C - _G ]∈^d_l × C and ^ := [ _1^ - _G^ ⋯ _C^ - _G^ ]∈^d_l × C are the re-centered class-means without and with neighborhood aggregation respectively. 
∙ Convergence of features to an OF: To track the convergence of the features , to an OF structure, we define ^OF_2(), ^OF_2() as: ^OF_2() := ^⊤/^⊤_F - _C/√(C)_F ^OF_2() := ^⊤^/^⊤^_F - _C/√(C)_F ∙ Generic alignment of weights and features: To track the alignment of _1 with its dual , we define _3(_1, ) as: _3(_1, ) := _1/_1_F - ^⊤/_F_F Similarly, to track the alignment of _2 with its dual , we define _3(_2, ) as: _3(_2, ) := _2/_2_F - ^⊤/^_F_F ∙ Alignment of weights and features with respect to simplex ETF: To track the alignment of _1 and its dual with respect to a simplex ETF, we define ^ETF_3(_1, ) as: ^ETF_3(_1, ) := _1/_1_F - 1/√(C-1)(_C - 1/C_C_C^⊤) _F Similarly, we track the alignment of _2 and its dual with respect to a simplex ETF using: ^ETF_3(_2, ) := _2^/_2^_F - 1/√(C-1)(_C - 1/C_C_C^⊤) _F ∙ Alignment of weights and features with respect to OF: To track the alignment of _1 and its dual with respect to an OF, we define ^OF_3(_1, ) as: ^OF_3(_1, ) := _1/_1_F - _C/√(C)_F Similarly, we track the alignment of _2 and its dual with respect to an OF using: ^OF_3(_2, ) := _2^/_2^_F - _C/√(C)_F § PROOF OF THEOREM <REF> In this appendix, we present the proof for theorem <ref> by analyzing the sufficiency and necessity conditions of the graph structure given by: (s_c1,1, ⋯, s_cC,1) = ⋯ = (s_c1, n, ⋯, s_cC, n ), ∀ c ∈ [C],C where the fraction of neighbors of node v_c,i that belong to class c' as s_cc',i = |𝒩_c'(v_c,i)|/|𝒩(v_c,i)|. Proof sketch: Our aim is to identify the structural properties of a graph (especially ), such that the features , which exhibit neural collapse are indeed the minimizers of the risk. First, we obtain a lower bound for the risk using Jensen's inequality and show that, for a `collapsed' to be the minimizer, it is sufficient if the graph satisfies condition . However, the inequality is applied on a convex function of {_c,i} (standard basis vectors) that is not strictly convex, and so, this analysis does not imply necessity. Thus, we show the necessity of condition C for a `collapsed' to be a minimizer of the risk by analyzing the optimality conditions of the stationary points. Additional details of the sketch for the `necessity' argument are also presented. Sufficiency: We begin by revisiting the risk ^' for K=1. For simplicity, we drop the superscript ', subscript k, and treat the unconstrained features of the corresponding graph = (, ) as , and denote = ^-1. The risk (_2, ) is now given by: (_2, ) := 1/2N_2 - _F^2 + λ_/2_F^2 + λ__2/2_2_F^2 =:(_2, ) + λ__2/2_2_F^2. Now, by denoting _c,i∈^N as the one-hot vector associated with the index of the feature column _c,i (among the N feature columns), we lower bound (_2, ) as follows[When K>1, observe that (_2, {}_k=1^K ) = ∑_k=1^K (_2, _k) + λ__2/2_2_F^2. Thus, each of the (_2, _k) terms can be lower-bounded independently, resulting in a lower-bound for (_2, {}_k=1^K ) itself.]: (_2, ) = 1/2N_2 - _F^2 + λ_H/2_F^2 = 1/2N∑_c=1^C∑_i=1^n_2_c,i - _c_F^2 + λ_H/2∑_c=1^C∑_i=1^n_c,i_2^2 = 1/2N∑_c=1^Cn/n∑_i=1^n_2_c,i - _c_F^2 + λ_H/2∑_c=1^Cn/n∑_i=1^n_c,i_2^2 ≥1/2N∑_c=1^Cn_21/n∑_i=1^n_c,i - _c_F^2 + λ_H/2∑_c=1^Cn1/n∑_i=1^n_c,i_2^2 = 1/2N∑_c=1^Cn_21/n∑_i=1^n^_c,i - _c_F^2 + λ_H/2∑_c=1^Cn1/n∑_i=1^n_c,i_2^2. Note that Jensen's inequality, which we used above, is applied on a convex function of _c,i's that is not strictly convex. 
Therefore, the lower-bound in equation <ref> can be attained despite having different _c,i's, e.g., when the properties P1 and P2, which are stated below, hold ∀ c ∈ [C]: * P1: _c, 1 = …=_c, n = _c * P2: _c,1^ = ⋯ = _c,n^ = _c^ The first property P1 indicates zero intra-class variability of , i.e., _W() →, and the second property P2 indicates zero intra-class variability of i.e, _W() →. Recall condition C: * C: (s_c1,1, ⋯, s_cC,1) = ⋯ = (s_c1, n, ⋯, s_cC, n ) = (s_c1, ⋯, s_cC ), where (s_c1, ⋯, s_cC ) (shared by any node i ∈ [n] in class c) represents any suitable tuple of the ratio of neighbors per class[Note that the tuple can be different for nodes belonging to different classes, but must be the same for nodes within the same class.]. We just need to show that if C is satisfied then with both P1 and P2 exists. Equivalently, we can assume P1 (which is in the “feasible set” of the optimization) and show that then C implies P2. Indeed, in this case _c,i^ = ∑_𝒩_c(v_c,i)_c,j + ∑_𝒩_c' c(v_c,i)_c',j/|𝒩(v_c,i)| = ∑_𝒩_c(v_c,i)_c + ∑_𝒩_c' c(v_c,i)_c'/|𝒩(v_c,i)| = ∑_c'=1^C s_cc'_c'. Therefore P2 holds: _c,1^ = ⋯ = _c,n^ = _c^. Accordingly, C is sufficient for having that obeys P1 and P2, and thus minimizes the risk. Sketch for `Necessity': The goal of this analysis is to nullify the possibility of having a minimizer that exhibits collapse (i.e., which satisfies P1) for a graph that does not satisfy condition C. We do so by analyzing the optimality conditions of the stationary points satisfying P1, and obtaining conditions on the matrix. Specifically, we obtain a system of linear equations that is shared by all n nodes within a class, which in turn leads to condition . Thus, proving its necessity. Necessity: Analysing the necessity of condition is relatively more complicated than the sufficiency case. Nonetheless, we prove the necessity by considering K=1, and by leveraging the idea of a tight-convex alternative for (_2, ) as been used in <cit.> for the conventional UFMs[The K=1 setting allows us to employ analysis strategies based on a tight convex formulation, which is not applicable for K>1 settings. Specifically, we cannot generalize the problem stated for =_2 to multiple {_k} as they share the same _2.]. Formally, assuming d_L-1≥ C, we minimize: () := 1/2N-_F^2 + λ_Z_*, where λ_Z=√(λ_H λ_W_2), ∈ℝ^C × N, and ·_* denotes the nuclear norm. Namely, if _2 and minimize the former, then = _2 minimizes the latter, which follows from: √(λ_H λ_W_2)_* = min__2, s.t. _2= ( λ_W_2/2_2_F^2 + λ_H/2_F^2 ). We start with providing necessity analysis for the case λ_Z = 0. Later, we generalize this analysis to address the case λ_Z > 0. Analysis for λ_Z = 0: When λ_Z = 0, observe that = = [_1,…,_C] ⊗1_n^⊤ = _C ⊗1_n^⊤ gives us the minimum value for (). Now, note that NC1 implies =⊗1_n^⊤=⊗1_n^⊤ (where ∈ℝ^d_L-1× C and =_2 ∈ℝ^C × C). Thus, by leveraging this Kronecker structure of =⊗1_n^⊤ =[_1,…,_C] ⊗1_n^⊤ and =[_1,…,_C] ⊗1_n^⊤, we formulate: [ _1 ⊗1_n^⊤ ⋯ _C ⊗1_n^⊤ ]ccc|c|ccc[margin] _1,1^1 ⋯ _1,1^n ⋯ _C,1^1 ⋯ _C,1^n ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ _1,C^1 ⋯ _1,C^n ⋯ _C,C^1 ⋯ _C,C^n = [ _1 ⊗1_n^⊤ ⋯ _C ⊗1_n^⊤ ] where _c,c'^i ∈^n represents the slice of columns in corresponding to node i ∈ [n] belonging to class c ∈ [C] and forming edges with nodes from class c' ∈ [C]. Additionally, since is the normalized adjacency matrix, we have s_cc', i = _n^⊤_c,c'^i ∈, which represents the sum of elements in _c,c'^i. This gives us: ∑_c'=1^C s_cc', i = 1, because = ^-1 is the degree-normalized adjacency matrix. 
Now, multiplying the matrix =⊗1_n^⊤ with the first column of gives us: s_11, 1_1 + ⋯ + s_1C, 1_C = _1 Similarly, due to the block structure of ,, we get for all i ∈ [n] and c=1: s_11, i_1 + ⋯ + s_1C, i_C = _1 where s_11, i_1 + ⋯ + s_1C, i_C = _1 itself can be written as C linear equations (one for each of the vector components) as formulated below: [ z_1,1 ⋯ z_C,1; ⋮ ⋱ ⋮; z_1,C ⋯ z_C,C; ][ s_11, i; ⋮; s_1C, i ] = _1 Now, by treating {s_11, i, ⋯, s_1C, i} as the C unknowns which satisfy equation <ref>, observe that the solution to this linear system remains the same for all nodes belonging to class c=1. This implies: s_1c', 1 = ⋯ = s_1c', n, ∀ c' ∈ [C], The generalization of this result for all C classes essentially indicates that: (s_c1,1, ⋯, s_cC,1) = ⋯ = (s_c1, n, ⋯, s_cC, n ), ∀ c ∈ [C], which exactly represents condition C as stated above. Analysis for λ_Z > 0: When exhibits neural collapse, we have the optimality condition for the minimizer of equation <ref> based on the sub-differential of the nuclear norm as follows: 1/N ( - ) ^⊤ + λ_Z _Z _Z^⊤ = 0 1/N ( ( ⊗1_n^⊤ ) - _C ⊗1_n^⊤ ) ^⊤ + λ_Z _Z _Z^⊤⊗1/√(n)1_n^⊤ = 0. Where ∈^C × C is the matrix of “collapsed” columns of , and _Z∈^C × C, _Z∈^N × C represent the left and right singular vectors of . Additionally, since holds a “block” structure, we represent _Z^⊤ = _Z^⊤⊗1/√(n)1_n^⊤, where _Z∈^C × C. ∙ Matrix Quadratic Form: By considering -Nλ_Z _Z _Z^⊤⊗1/√(n)1_n^⊤ =, we get: ( - )^⊤ = , as our optimality condition. Analyzing this condition along the same lines as λ_Z = 0 is non-trivial due to the outer product of ^⊤. To address this complication, we leverage the fact that SSBM graphs tend to be regular with a high probability as N increases. Thus, by assuming that is symmetric, we obtain the following matrix quadratic form: ^2 - - = . As is rectangular, we can take the pseudo-inverse and obtain: ^2 - ^† - ^† = . Since ^†, ^†∈^N × N, we can treat as the variable matrix and leverage the results on quadratic matrix equations by <cit.>. Especially, we leverage theorem 3 in <cit.> and employ a generalized Schur decomposition technique to obtain a condition on (as shown in the following lemma). This condition on allows us to obtain a system of linear equations that are shared by all n nodes within a class (similar to the λ_Z = 0 case). Thus, establishing the necessity of condition C. Let = [ ; ^† ^† ], = [ ; ] = ∈^2N × 2N, and the generalized Schur decomposition of , be given by: ^⊤ = , ^⊤ = , where , are unitary and , are upper triangular. = [ _11 _12; _21 _22 ]∈^2N × 2N is a 2 × 2 block matrix with blocks of size N × N. Then satisfies: _11 = _21. The proof is relatively straightforward once we observe that satisfies: [ ; ] = [ ; ]. Now, the condition _11 = _21 is a direct consequence of theorem 3 in <cit.>, which leverages the QR decomposition of [ ; ] using as the orthogonal matrix. ∙ Kronecker structure of : To leverage the relationship between and as per Lemma <ref>, a closer look at is required. Observe that: = [ ; ^† ^† ] = [ ^† ^†; ^† ^† ] = [ ^† ; ^† ][ ; ]. By expanding the Kronecker structures of , ,, we get: =[ ^† ; ^† ][ ⊗_n^⊤ ⊗_n^⊤; ⊗_n^⊤ _C ⊗_n^⊤ ] = [ ⋯ 0 ^†_1 ⋯ ^†_C; ^†_1 ⋯ ^†_C ^†_1 ⋯ ^†_C ]⊗_n^⊤ . Observe that the pseudo-inverse ^† can be represented by: ^† = (_Z⊗1/√(n)1_n)^†_Z _Z^⊤ which also holds a “block” structure (but with respect to rows, instead of columns): ^† = 1/√(n)[ _1^⊤; ⋮; _C^⊤ ]^†_Z _Z^⊤ = 1/√(n)[ _1^⊤^†_Z _Z^⊤; ⋮; _C^⊤^†_Z _Z^⊤ ]_N × C. Where _j ∈^C, ∀ j ∈ [C] and _j^⊤ is the j^th row of _Z∈^C × C. 
With this formulation, a matrix-vector product term in , for instance ^†_i, i ∈ [C], can be given as: ^†_i = 1/√(n)[ _1^⊤^†_Z _Z^⊤_i; ⋮; _C^⊤^†_Z _Z^⊤_i ]_N × 1 = (_Z^†_Z _Z^⊤_i) ⊗1/√(n)_n. By considering the following notational simplifications: _i = _Z^†_Z _Z^⊤_i _i = _Z^†_Z _Z^⊤_i _i = _Z^†_Z _Z^⊤_i, we can represent as: = [ ⋯ ^†_1 ⋯ ^†_C; ^†_1 ⋯ ^†_C ^†_1 ⋯ ^†_C ]⊗_n^⊤ = [ ⋯ _1 ⊗1/√(n)_n ⋯ _C ⊗1/√(n)_n; _1 ⊗1/√(n)_n ⋯ _C ⊗1/√(n)_n _1 ⊗1/√(n)_n ⋯ _1 ⊗1/√(n)_n ]⊗_n^⊤ = [ ⋯ _1 ⋯ _C; _1 ⋯ _C _1 ⋯ _1 ]_2C × 2C⊗1/√(n)_n ⊗_n^⊤. ∙ Schur decomposition of : For notational simplicity, let : = ⊗1/√(n)_n ⊗_n^⊤ Where: = [ ⋯ _1 ⋯ _C; _1 ⋯ _C _1 ⋯ _1 ]_2C × 2C. Since =, the diagonal entries of must equal the eigenvalues of . This condition is satisfied when =. This also simplifies the generalized Schur decomposition for square matrices , to the standard Schur decomposition of . Now, let the schur decomposition of ∈^2N × 2N, ∈^2C × 2C be given as: = ^⊤, = ^⊤. Here , ∈^2N × 2N and , ∈^2C × 2C. To find a relation between , , ,, we can leverage the Schur decomposition properties of Kronecker products <cit.> and obtain: = ⊗1/√(n)_n ⊗_n^⊤ = ⊗1/√(n)_n _n^⊤ ^⊤ = (^⊤) ⊗( ^⊤) = ( ⊗)( ⊗)( ^⊤⊗^⊤). Where , ∈^n × n are unitary and upper triangular respectively, and are the Schur decomposition factors of 1/√(n)_n _n^⊤∈^n × n. ∙ Linear systems: In matrix form, = ⊗ can be represented as: = c|c[margin] _11⊗ _12⊗ _21⊗ _22⊗_2N × 2N. Where _11, _12 , _21 ,_22∈^C × C. Now, observe that _11 = _21 (based on Lemma <ref>) can be reformulated as: (_11⊗) = ( _21⊗). Now, as per equation <ref>, we leverage the same line of analysis that we followed for the λ_Z = 0 case. For notational simplicity, we represent the unitary matrix ∈^n × n in column format as follows: = [ _1 ⋯ _n ], where _i ∈^n, i ∈ [n] are linearly independent vectors. Now, by multiplying the first row of _11⊗ and the first column of , we get: ccc[margin] (_11)_1,1[ _1 ⋯ _n ] ⋯ (_11)_1,C[ _1 ⋯ _n ]c[margin] a_1,1 ⋮ a_n,1 ⋮ a_(C-1)n+1,1 ⋮ a_Cn,1 = (_21)_1,1_1 . This translates to the following linear equation: a_1,1(_11)_1,1_1 + ⋯ + a_n,1(_11)_1,1_n + ⋯ + a_(C-1)n+1,1(_11)_1,C_1 + ⋯ + a_Cn,1(_11)_1,C_n = (_21)_1,1_1. Due to linear independence of vectors _i, i ∈ [n], we obtain the following n equations pertaining to the coefficients of _i: a_1,1(_11)_1,1 + ⋯ + a_(C-1)n+1,1(_11)_1,C = (_21)_1,1 a_2,1(_11)_1,1 + ⋯ + a_(C-1)n+2,1(_11)_1,C = 0 ⋮ a_n,1(_11)_1,1 + ⋯ + a_Cn,1(_11)_1,C = 0 . By adding all these equations, we get: s_11, 1(_11)_1,1 + ⋯ + s_1C,1(_11)_1,C = (_21)_1,1. By following the same approach for the other rows of _11⊗_1_n and the first column of , we get the following system of equations for node v_1,1: [ (_11)_1,1 ⋯ (_11)_1,C; ⋮ ⋱ ⋮; (_11)_C,1 ⋯ (_11)_C,C; ][ s_11, 1; ⋮; s_1C, 1 ] = [ (_21)_1,1; ⋮; (_21)_C,1 ]. The same line of analysis can be applied for all the rows of _11⊗_1_n and the second column of , to get the following system of equations for node v_1,2: [ (_11)_1,1 ⋯ (_11)_1,C; ⋮ ⋱ ⋮; (_11)_C,1 ⋯ (_11)_C,C; ][ s_11, 2; ⋮; s_1C, 2 ] = [ (_21)_1,1; ⋮; (_21)_C,1 ] Thus, it is straightforward that the systems of equations are the same for all n nodes belonging to class c=1, as the procedure of row and column multiplication remains the same. Thus, the C unknowns in these linear systems have the same solution for all n nodes belonging to class c=1. It is straightforward to extend this to any class c ∈ [C] and obtain: (s_c1,1, ⋯, s_cC,1) = ⋯ = (s_c1, n, ⋯, s_cC, n ), ∀ c ∈ [C], which exactly represents the condition as per the theorem. 
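Before turning to the probability analysis of the next appendix, we note that condition C is straightforward to test numerically for a given labelled graph. The following minimal sketch (function names and the tolerance are ours and purely illustrative; whether self-loops are included in A is left to the caller, matching the convention used for the normalized operator, and every class is assumed non-empty) computes the per-node neighbor fractions s_cc',i and checks that they coincide within every class.

import numpy as np

def neighbor_fractions(A, y, C):
    # y: integer class labels in {0, ..., C-1}.
    # s[i, c'] = fraction of the neighbors of node i that belong to class c'.
    deg = A.sum(axis=1, keepdims=True)
    onehot = np.eye(C)[y]                       # N x C class-membership indicators
    return (A @ onehot) / np.maximum(deg, 1.0)

def satisfies_condition_C(A, y, C, tol=1e-9):
    # Condition C: all nodes of a class share the same tuple of neighbor fractions.
    s = neighbor_fractions(A, y, C)
    for c in range(C):
        rows = s[y == c]
        if not np.allclose(rows, rows[0], atol=tol):
            return False
    return True

Applied to graphs drawn from SSBM(N, C, p, q), a check of this kind underlies the empirical counts reported in the illustration of the next appendix.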
§ PROOF OF THEOREM <REF> In this appendix, we derive the upper bound on the probability of sampling the desired neighborhood for condition C. Recall that to satisfy condition C, the requirement w.r.t is for (|𝒩_1(v_c,i)|/|𝒩(v_c,i)|, ⋯, |𝒩_C(v_c,i)|/|𝒩(v_c,i)|), ∀ i ∈ [n] to be the same for a given c ∈ [C]. To this end, we are primarily concerned with the probabilities of edges between nodes v_i, v_j, 1 ≤ i ≤ j ≤ N. Thus, as per preliminaries, the probability matrix can be given in block form as: = [ p_n_n^⊤ q_n_n^⊤ ⋯ q_n_n^⊤; ⋮ p_n_n^⊤ ⋯ q_n_n^⊤; ⋮ ⋱ ⋱ ⋮; ⋮ ⋯ ⋯ p_n_n^⊤; ]_N × N Where we are only concerned with the diagonal and upper triangular values[Due to symmetry, one can equivalently consider the lower triangular values and proceed with sums of columns instead of sums of rows.]. Now, for a pair of classes c, c' ∈ [C], we are concerned with the block probability matrix p_n_n^⊤ when c=c' and q_n_n^⊤ when c ≠ c' for sampling edges between nodes. Observe that sampling edges within a community based on diagonal block matrix p_n_n^⊤ is the same as sampling an Erdos-Renyi graph with edge probability p. Similarly, sampling edges between communities based on off-diagonal block matrix q_n_n^⊤ is the same as sampling a bipartite graph with edge probability q. Concentration of pairwise neighbor ratios: To begin with, consider the set of nodes belonging to class c ∈ [C] as Ω_c = {v_c,1, ⋯, v_c, n}. Now, to satisfy condition C, the fraction of neighbors of nodes v_c,i, v_c,j, i j ∈ [n] that belong to class c' ∈ [C] should be equal, i.e s_cc',i = s_cc',j. Formally, this leads to: |𝒩_c'(v_c,i)|/|𝒩(v_c,i)| = |𝒩_c'(v_c,j)|/|𝒩(v_c,j)||𝒩_c'(v_c,i)|/|𝒩_c'(v_c,j)| = |𝒩(v_c, i)|/|𝒩(v_c,j)| Without loss of generality, observe that |𝒩_c'(v_c,i)| is essentially the sum of n independent Bernoulli random variables γ_c, i^c', l, ∀ l ∈ [n] with (γ_c,i^c',l = 1) = q. This implies that 𝔼|𝒩_c'(v_c,i)| = nq. Now, we apply the Chernoff bound to obtain: ( | |𝒩_c'(v_c,i)| - nq | ≥δ nq ) ≤ 2e^-t nqδ^2 Where δ∈ [0, 1] and t > 0 is a constant. By choosing δ = √((r+1)ln n/tnq) for sufficiently large r > 0, N >> C, we get δ = O(1) as q = b ln N/N . Now, by taking a union bound over all the nodes in the class, we get: ( ∀ i ∈ [n], | |𝒩_c'(v_c,i)| - nq | ≥δ nq ) ≤ 2ne^-(r+1)ln n Thus, with a probability at-least 1-2n^-r, we get: |𝒩_c'(v_c,i)| = nq( 1 ± O(1) ) By applying the same line of argument to |𝒩_c'(v_c,j)| and assuming a sufficiently large value of n, we get with a probability at-least 1-2n^-r that: |𝒩_c'(v_c,i)|/|𝒩_c'(v_c,j)|→ 1 To this end, we assume that |𝒩_c'(v_c,i)| = |𝒩_c'(v_c,j)| , ∀ c ∈ [C], i j ∈ [n] with a high probability for the rest of the analysis. The consequence of this assumption is that all the nodes belonging to the same class have the same degree. However, it is not necessary that the graph itself is regular. Off-diagonal blocks: Without loss of generality, consider the set of nodes belonging to a pair of classes c c' ∈ [C] as Ω_c = {v_c,1, ⋯, v_c, n}, Ω_c' = {v_c', 1, ⋯, v_c', n} respectively. Now, we need to ensure that every node v_c,i∈Ω_c is connected to (say) exactly t_cc'∈, 0 ≤ t_cc'≤ n, nodes in Ω_c'. If E_c,i^c'(t_cc') indicates that such an event occurs for node v_c,i∈Ω_c with respect to Ω_c', we can formally represent it as the sum of n independent Bernoulli random variables γ_c, i^c', j, i ∈ [n], ∀ j ∈{1, …, n} with (γ_c,i^c',j = 1) = q sum to t_cc' as follows: (E_c,i^c'(t_cc')) = n t_cc'q^t_cc'(1-q)^n - t_cc'. 
Now, we are concerned with an intersection of events pertaining to all nodes in Ω_c, which ensures that each node has exactly t_cc' neighbors in Ω_c'. By considering the event E_c^c'(t_cc') = ⋂_i=1^n E_c,i^c'(t_cc') and leveraging the fact that edges are sampled independently, we obtain: (E_c^c'(t_cc')) = (⋂_i=1^n E_c,i^c'(t_cc') ) = [n t_cc'q^t_cc'(1-q)^n - t_cc']^n. Now, to account for all possible values of 0 ≤ t_cc'≤ n, we compute the probability of the union of events E_c^c' = ⋃_t_cc'=0^n E_c^c'(t_cc') as: (E_c^c') ≤∑_t_cc'=0^n [n t_cc'q^t_cc'(1-q)^n - t_cc']^n. Note that this result can be applied to any distinct pair of classes c c' ∈ [C] based on the characteristics of the SSBM. Since we have C 2 = C(C-1)/2 combinations of distinct communities, the probability of occurrence of all the corresponding events is given by: (⋂_c=1^C-1⋂_c'=c+1^CE_c^c') ≤∏_c=1^C-1∏_c'=c+1^C∑_t_cc'=0^n [n t_cc'q^t_cc'(1-q)^n - t_cc']^n. Diagonal blocks: Handling the exact event probabilities when c=c' ∈ [C] is not so straightforward due to symmetry constraints. To begin with, observe that the sum of n independent Bernoulli random variables γ_c, i^c, j, i ∈ [n], ∀ j ∈{1, …, n} with (γ_c,i^c,j = 1) = p sum to t_cc is given by: (E_c,i^c(t_cc)) = ( ∑_j=1^nγ_c,i^c,j = t_cc) = n t_ccp^t_cc(1-p)^n - t_cc Since (E_c,j^c(t_cc)) for j > i ∈ [n] is conditional on (E_c,i^c(t_cc)), the desired probability for the event E_c^c(t_cc) = ⋂_i=1^n E_c,i^c(t_cc) can be formulated as: (E_c^c(t_cc)) = (⋂_i=1^n E_c,i^c(t_cc)) = (E_c,1^c(t_cc))·(E_c,2^c(t_cc) | E_c,1^c(t_cc)) ⋯(E_c,n^c(t_cc) | ⋂_i=1^n-1 E_c,i^c(t_cc)) ≤[ n t_ccp^t_cc(1-p)^n - t_cc]^n Where the last inequality is based on the upper bound for conditional probabilities if the events were independent. Now, to account for all possible values of 0 ≤ t_cc≤ n, we compute the probability of the union of events E_c^c = ⋃_t_cc=0^n E_c^c(t_cc) as: (E_c^c) ≤∑_t_cc=0^n [n t_ccp^t_cc(1-p)^n - t_cc]^n Thus, we can estimate the probability of the intersection of all events pertaining to c,c' ∈ [C] as: (⋂_c=1^C⋂_c'=c^CE_c^c') ≤( ∏_c=1^C-1∏_c'=c+1^C∑_t_cc'=0^n [n t_cc'q^t_cc'(1-q)^n - t_cc']^n) · ( ∏_c=1^C ∑_t_cc=0^n [n t_ccp^t_cc(1-p)^n - t_cc]^n ) Finally, by ignoring the extreme case where t_cc' = 0, ∀ c,c' ∈ [C], we get the strict inequality of: (⋂_c=1^C⋂_c'=c^CE_c^c') < ( ∑_t_cc'=0^n [n t_cc'q^t_cc'(1-q)^n - t_cc']^n)^C(C-1)/2· ( ∑_t_cc=0^n [n t_ccp^t_cc(1-p)^n - t_cc]^n )^C §.§ Illustration with exhaustive combinations for a small graph A key assumption in our theoretical analysis has been N>>C, which allowed us to assume that nodes belonging to the same class have the same degree. However, for an intuitive understanding of condition C, let us consider an extremely simple graph with N=4, C=2. This leads to the following adjacency matrix formulation: = [ γ_11 γ_12 γ_13 γ_14; γ_12 γ_22 γ_23 γ_24; γ_13 γ_23 γ_33 γ_34; γ_14 γ_24 γ_34 γ_44 ] Where γ_ij represents a Bernoulli random variable depending on p, q. We defer assigning values to p,q until the end as we are interested in the `realizations' of that satisfy condition C. Observe that there are only 10 random variables in due to symmetry (4 on the diagonal and 6 on the upper-triangular part). Since each can either take a value of 0, 1, there are 1024 unique realizations of . 
To this end, condition C can be represented as: γ_11 + γ_12/γ_11 + γ_12 + γ_13 + γ_14 = γ_12 + γ_22/γ_12 + γ_22 + γ_23 + γ_24 and γ_13 + γ_14/γ_11 + γ_12 + γ_13 + γ_14 = γ_23 + γ_24/γ_12 + γ_22 + γ_23 + γ_24 and γ_13 + γ_23/γ_13 + γ_23 + γ_33 + γ_34 = γ_14 + γ_24/γ_14 + γ_24 + γ_34 + γ_44 and γ_33 + γ_34/γ_13 + γ_23 + γ_33 + γ_34 = γ_34 + γ_44/γ_14 + γ_24 + γ_34 + γ_44. Based on our simulations for graphs with self-edges, less than 1/10 of the 1024 realizations satisfy this property. A few are illustrated below: = [ 1 1 0 1; 1 1 1 0; 0 1 1 0; 1 0 0 1 ], = [ 1 1 0 1; 1 1 1 0; 0 1 0 1; 1 0 1 0 ] = [ 1 0 1 0; 0 1 0 1; 1 0 1 0; 0 1 0 1 ], = [ 1 0 0 1; 0 1 1 0; 0 1 0 1; 1 0 1 0 ] By considering p=0.2, q=0.05, the probability of sampling such is ≈ 0.046. Interestingly, when p=0.05, q=0.2, the probability increases to ≈ 0.06. Now, by increasing the density of edges in the graph, i.e., when p=0.4,q=0.1 and p=0.1,q=0.4, we get probability values ≈ 0.166, 0.178 respectively. Now, let us consider the case where N=8, C=2. In this case, we have 36 Bernoulli random variables γ_ij, i ≤ j ∈{1,⋯,8}. The number of possible values for turns out to be 2^36 = 68,719,476,736, for which the brute force approach to validate condition C is not efficient. To this end, we follow a simple Monte-Carlo approach and draw 1,000,000 random graphs from SSBM(N=8,C=2,p=0.5,q=0.2). We observed that only ≈ 800 graphs out of 1,000,000 satisfied condition C, i.e., a probability of ≈ 0.0008. A few are illustrated below: = [ 0 1 1 1 0 0 0 0; 1 1 0 1 0 0 0 0; 1 0 1 1 0 0 0 0; 1 1 1 0 0 0 0 0; 0 0 0 0 0 1 1 0; 0 0 0 0 1 1 1 1; 0 0 0 0 1 1 1 1; 0 0 0 0 0 1 1 0; ], = [ 1 1 1 1 0 1 0 0; 1 1 1 1 0 0 1 0; 1 1 1 1 1 0 0 0; 1 1 1 1 0 0 0 1; 0 0 1 0 0 1 1 1; 1 0 0 0 1 1 0 1; 0 1 0 0 1 0 1 1; 0 0 0 1 1 1 1 0; ] = [ 0 1 1 0 0 1 0 1; 1 1 0 1 0 1 1 1; 1 0 1 1 1 1 1 0; 0 1 1 1 1 0 1 1; 0 0 1 1 0 1 0 1; 1 1 1 0 1 0 1 1; 0 1 1 1 0 1 1 1; 1 1 0 1 1 1 1 0; ], = [ 1 1 1 0 1 1 0 0; 1 0 1 1 1 0 1 0; 1 1 0 1 0 1 0 1; 0 1 1 1 0 0 1 1; 1 1 0 0 1 1 1 0; 1 0 1 0 1 0 1 1; 0 1 0 1 1 1 0 1; 0 0 1 1 0 1 1 1; ] Thus, the takeaway is that, even for small-medium scale homophilic and heterophilic SSBM graphs, random graphs that satisfy condition C are rarely sampled. § PROOF OF THEOREM <REF> In this appendix, we analyse the gradient flow stated in (<ref>) and establish the results stated in Theorem <ref>. Our analysis is inspired by the one in <cit.>, but significantly differs from it, as we need to overcome the complexity introduced by the graph structure matrix. Based on the setup of gUFM for GNN ψ^', we analyze the risk ^' given as follows: ^'(_2, {_k}_k=1^K) := 1/K∑_k=1^K ( 1/2N_2_k_k - _F^2 + λ_H_k/2_k_2^2 ) + λ_W_2/2_2_2^2 By taking the derivatives of ^' with respect to (_2, _k), we get: ∂^'/∂_2 = 1/KN∑_k=1^K (_2_k_k - )(_k_k)^⊤ + λ_W_2_2 ∂^'/∂_k = 1/KN[_2^⊤(_2_k_k - )_k^⊤] + 1/Kλ_H_k_k Now, by setting ∂^'/∂_2 = 0 we get the closed form representation of _2 in terms of . 
∂^'/∂_2 = 1/KN∑_k=1^K (_2_k_k - )(_k_k)^⊤ + λ_W_2_2 = 0 1/K∑_k=1^K ( _2_k_k_k^⊤_k^⊤ - _k^⊤_k^⊤) + λ_W_2N_2 = 0 _2[ 1/K∑_k=1^K _k_k_k^⊤_k^⊤ + λ_W_2N] = 1/K∑_k=1^K _k^⊤_k^⊤ Thus, the ideal value of _2^* is given by: _2^* = [ 1/K∑_k=1^K _k^⊤_k^⊤][ 1/K∑_k=1^K _k_k_k^⊤_k^⊤ + λ_W_2N]^-1 Now, under the assumption that K=1, C=2, we drop the subscript for _k, _k to get: _2^* = ( ^⊤^⊤)( ^⊤^⊤ + λ_W_2N)^-1 To this end, we return to our risk minimization formulation for a single graph with these optimal values as follows: ^'() = 1/2N_2^* - _F^2 + λ_W_2/2_2^*_2^2 + λ_H/2_2^2 §.§ as a perturbation of 𝔼 Since we are dealing with SSBM graphs, we can formulate as the perturbed version of its expected value. Formally, = 𝔼 + where 𝔼∈^N × N is the expected normalized adjacency matrix and ∈^N × N is the perturbation matrix. 𝔼 can be written in block matrix form as: 𝔼 = 1/np + nq[ p_n_n^⊤ q_n_n^⊤; q_n_n^⊤ p_n_n^⊤; ]_N × N Where the 𝔼 has eigenvalues 1, p - q/p + q associated with eigenvectors 1/√(N) and = 1/√(N)[ _n; -_n ]_N respectively. Thus, we can represent 𝔼 using its spectral information as follows: 𝔼 = 2/N(p + q)( p + q/2_N_N^⊤ + p - q/2^⊤) = [ _N ][ α_1 0; 0 α_2 ][ _N ]^⊤ = [ _N ][ √(α_1) 0; 0 √(α_2) ][ √(α_1) 0; 0 √(α_2) ]^⊤[ _N ]^⊤ = ^⊤ Where α_1 = 1/N, α_2 = p-q/(p+q)N, and = [ _N ][ √(α_1) 0; 0 √(α_2) ]∈^N × 2 is the factor matrix. §.§ Preliminary results §.§.§ Relating _B(), _B() Since C=2 in our analysis and _G = _1 + _2/2 due to balanced communities, note that: _B() = 1/2∑_c=1^2 ( _c - _G )( _c - _G )^⊤ = 1/2( ( _1 - _G )( _1 - _G )^⊤ + ( _2 - _G )( _2 - _G )^⊤) = 1/4( _1 - _2 )( _1 - _2 )^⊤ = 1/4(_1_1^⊤ + _2_2^⊤ - _1_2^⊤ - _2_1^⊤) = 1/4(2_B() - _1_2^⊤ - _2_1^⊤) Thus, we get the following relation between _B(), _B(): 2_B() - 4_B() = _1_2^⊤ + _2_1^⊤ Additionally, we can extend this result to the following: 2_B() - 4_B() = _1_2^⊤ + _2_1^⊤ = (2_G - _2 ) _2^⊤ + (2_G - _1 ) _1^⊤ = 2_G ( _1^⊤ + _2^⊤) - _1_1^⊤ - _2_2^⊤ = 4_G() - 2_B() Where _G = _G_G^⊤. This gives us: _B() - _G() = _B() §.§.§ Expanding ^⊤ Based on the perturbed representation of = 𝔼 +, we can modify ^⊤ and ^⊤^⊤ as: ^⊤ = [ 𝔼]^⊤ + ^⊤ = [ ][ ]^⊤ + ^⊤ ^⊤^⊤ = [ 𝔼]^⊤^⊤ + ^⊤^⊤ = [ 𝔼]^⊤ + ^⊤^⊤ = [ ][ ]^⊤ + ^⊤^⊤ Observe that can be broken down in terms of _c,i, c ∈ [C], i ∈ [n] as follows: = [ _1,1 ⋯ _1, n _2, 1 ⋯ _2, n ]_d_L-1× N[ _N ]_N × 2[ √(α_1) 0; 0 √(α_2) ]_2 × 2 = [ √(α_1)∑_c=1^2∑_i=1^n_c,i √(α_2)( ∑_i=1^n_1,i - ∑_i=1^n_2,i) ] = [ 2n√(α_1)_G n√(α_2)( _1 - _2) ] This leads to the following expansion of [ ][ ]^⊤ : [ ][ ]^⊤ = [ 2n√(α_1)_G n√(α_2)( _1 - _2) ][ 2n√(α_1)_G^⊤; n√(α_2)( _1 - _2)^⊤ ] = 4n^2α_1 _G _G^⊤ + n^2α_2 ( _1 - _2)( _1 - _2)^⊤ = 4n^2α_1 _G _G^⊤ + 4n^2α_2 ( _1 - _G )( _1 - _G )^⊤ Since _G = _1 + _2/2 (due to balanced classes). This also implies that: [ ][ ]^⊤ = 4n^2α_1 _G _G^⊤ + 4n^2α_2 ( _2 - _G )( _2 - _G )^⊤ Thus, by taking the average of values in equation <ref>, <ref>, we get: [ ][ ]^⊤ = 4n^2α_1 _G() + 4n^2α_2 _B() = 4n^2[ 1/N_G() + (p-q)/(p+q)N_B() ] = 2n[ _B() - _B() + (p-q)/(p+q)_B() ] = 2n[ _B() - 2q/(p+q)_B() ] Finally, ^⊤ can be simplified to: ^⊤ = 2n[ _B() - 2q/(p+q)_B() ] + Δ_1 Where Δ_1 = ^⊤ corresponds to the first order perturbation term. 
Similarly, ^⊤^⊤ = 2n[ _B() - 2q/(p+q)_B() ] + Δ_1^⊤ §.§.§ Expanding ^⊤^⊤ Expanding ^⊤ can be done along the same lines: ^⊤ = [ 𝔼 + ][ 𝔼 + ]^⊤ = [ 𝔼]^2 + ^⊤ + [ 𝔼] + [ 𝔼] ^⊤ [𝔼]^2 = [ _N ][ α_1 0; 0 α_2 ][ _N ]^⊤[ _N ][ α_1 0; 0 α_2 ][ _N ]^⊤ = [ _N ][ α_1 0; 0 α_2 ][ Nα_1 0; 0 Nα_2 ][ _N ]^⊤ = [ √(α_1N) 0; 0 √(α_2N) ][ √(α_1N) 0; 0 √(α_2N) ]^⊤^⊤ Based on the formulations above, [𝔼]^2^⊤ can be given by: [𝔼]^2^⊤ = [ [ √(α_1N) 0; 0 √(α_2N) ]][ [ √(α_1N) 0; 0 √(α_2N) ]]^⊤ = [ 2nα_1√(N)_G nα_2√(N)( _1 - _2) ][ 2nα_1√(N)_G^⊤; nα_2√(N)( _1 - _2)^⊤ ] = 4n^2α_1^2N _G _G^⊤ + n^2α_2^2N ( _1 - _2)( _1 - _2)^⊤ = 4n^2α_1^2N _G _G^⊤ + 4n^2α_2^2N ( _1 - _G )( _1 - _G )^⊤ Since _G = _1 + _2/2 (due to balanced classes). This also implies that: [𝔼]^2^⊤ = 4n^2α_1^2N _G _G^⊤ + 4n^2α_2^2N ( _2 - _G )( _2 - _G )^⊤ Thus, based on taking the average of values in equation <ref>, <ref>, we get: [𝔼]^2^⊤ = 4n^2α_1^2N _G() + 4n^2α_2^2N _B() = 8n^3[ 1/N^2_G() + (p-q)^2/N^2(p+q)^2_B() ] = 2n[ _B() - 4pq/(p+q)^2_B() ] Finally, ^⊤^⊤ can be simplified to: ^⊤^⊤ = 2n[ _B() - 4pq/(p+q)^2_B() ] + Δ_2 Where Δ_2 = [ ^⊤ + [ 𝔼] + [ 𝔼] ^⊤ ]^⊤ is a symmetric matrix corresponding to the first and second order perturbation terms. §.§.§ Expanding ^⊤^⊤, ^⊤ We follow similar line of expansions for ^⊤^⊤ to get: ^⊤^⊤ = [𝔼]^⊤ + ^⊤^⊤ = (_2 ⊗_n^⊤)[𝔼]^⊤ + ^⊤^⊤ = (_2 ⊗_n^⊤)1/np+nq[ np_1^⊤ + nq_2^⊤; ⋮; nq_1^⊤ + np_2^⊤; ⋮ ] + ^⊤^⊤ = n/p+q[ p_1^⊤ + q_2^⊤; q_1^⊤ + p_2^⊤ ] + Δ_3 Where Δ_3 = ^⊤^⊤ is the first order perturbation term. Next, observe that: ^⊤ = (_2 ⊗_n^⊤)^⊤ = n^⊤ §.§.§ Expanding ^⊤^⊤^⊤ ^⊤^⊤^⊤ = [ 𝔼 + ] ^⊤[ 𝔼 + ]^⊤^⊤ = ( n/p+q[ p_1^⊤ + q_2^⊤; q_1^⊤ + p_2^⊤ ]^⊤ + Δ_3^⊤)( n/p+q[ p_1^⊤ + q_2^⊤; q_1^⊤ + p_2^⊤ ] + Δ_3 ) = n^2/(p+q)^2[ p_1 + q_2 q_1 + p_2 ][ p_1^⊤ + q_2^⊤; q_1^⊤ + p_2^⊤ ] + Δ_3 = n^2/(p+q)^2[ (p^2+q^2)(_1_1^⊤ + _2_2^⊤) + (2pq)(_1_2^⊤ + _2_1^⊤) ] + Δ_3 (a)=n^2/(p+q)^2[ 2(p^2+q^2)_B() + (2pq)( 2_B() - 4_B() ) ] + Δ_3 = 2n^2 [ _B() - 4pq/(p+q)^2_B() ] + Δ_3 Where Δ_3 = ( ^⊤^⊤ + [ 𝔼] ^⊤^⊤ + ^⊤[ 𝔼] ) ^⊤ and the equality (a) is based on equation <ref>. §.§ Trace formulation of risk Now, note that the risk can be formulated in terms of matrix traces as follows: ^'() = 1/2NTr{( _2^* - )( _2^* - )^⊤} + λ_W_2/2Tr{_2^*_2^*⊤} + λ_H/2Tr{^⊤} Where the term ( _2^* - )( _2^* - )^⊤ can be expanded as follows: ( _2^* - )( _2^* - )^⊤ = _2^*^⊤^⊤_2^*⊤ - _2^*^⊤ - ^⊤^⊤_2^*⊤ + ^⊤ Since _2^*[ ^⊤^⊤ + λ_W_2N] = ^⊤^⊤, we can multiply _2^*⊤ on both sides and get: _2^*^⊤^⊤_2^*⊤ = ^⊤^⊤_2^*⊤ - λ_W_2N_2^*_2^*⊤ Using these simplifications and matrix trace properties, the risk can be modified as: = 1/2NTr{( _2^* - )( _2^* - )^⊤} + λ_W_2/2Tr{_2^*_2^*⊤} + λ_H/2Tr{^⊤} = 1/2NTr{ - _2^*^⊤ + ^⊤} + λ_H/2Tr{^⊤} = 1/2NTr{ - ^⊤^⊤[ ^⊤^⊤ + λ_W_2N]^-1^⊤ + ^⊤} + λ_H/2Tr{^⊤} = 1/2NTr{ - ^⊤^⊤^⊤[ ^⊤^⊤ + λ_W_2N]^-1} + 1/2NTr{^⊤} + λ_H/2Tr{^⊤} Where the covariance matrix formulations for ^⊤^⊤, ^⊤^⊤^⊤ can be leveraged to formulate the risk as: ^'() = -1/2NTr{[ 2n^2 ( _B() - 4pq/(p+q)^2_B() ) + Δ_3 ] [ 2n ( _B() - 4pq/(p+q)^2_B() ) + Δ_2 + λ_W_2N]^-1} + 1/2 + λ_H/2Tr{N_T() } §.§ Trace evolution of covariance matrices Now, we analyze the traces of d _W/dt, d _B/dt along the gradient flow: d_t/dt = - ∇^'(_t). Let ∂_kjl represent the derivative of l^th entry of _k,j. For notational simplicity, we also consider _W = _W(), _B = _B(), _B = _B(), _T = _T(). 
This leads to: ∂_kjl_B = 1/2n( _l ( _k - _G )^⊤ + ( _k - _G )^⊤_l ) ∂_kjl_B = 1/2n( _l _k^⊤ + _k^⊤_l ) ∂_kjl_W = 1/2n( _l ( _k,j - _k )^⊤ + ( _k,j - _k )^⊤_l ) ∂_kjl_T = 1/2n( _l _k,j^⊤ + _k,j^⊤_l ) Now, considering = 2n ( _B - 4pq/(p+q)^2_B ), we formulate ∂_kjl^'() as: ∂_kjl^'() = -n/2NTr{∂_kjl( [ + Δ_3/n][ + Δ_2 + λ_W_2N]^-1) } + Nλ_H/4n(2^⊤_l_k,j) Since C=2 in our analysis, the derivative expands into: ∂_kjl^'() = -1/4Tr{∂_kjl( + Δ_3/n)[ + Δ_2 + λ_W_2N]^-1} - 1/4Tr{[ + Δ_3/n] ∂_kjl( [ + Δ_2 + λ_W_2N]^-1) } + λ_H^⊤_l_k,j Where the second term can be expanded as: 1/4Tr{[ + Δ_3/n] ∂_kjl( [ + Δ_2 + λ_W_2N]^-1) } = - 1/4Tr{[ + Δ_3/n] [ + Δ_2 + λ_W_2N]^-1∂_kjl( + Δ_2 )[ + Δ_2 + λ_W_2N]^-1} = - 1/4Tr{[ + Δ_2 + λ_W_2N]^-1[ + Δ_3/n] [ + Δ_2 + λ_W_2N]^-1∂_kjl( + Δ_2 )} Now, by expanding ∂_kjl() in terms of covariance matrix derivatives, we get: ∂_kjl() = ∂_kjl 2n ( _B - 4pq/(p+q)^2_B ) = ( _l _k^⊤ + _k^⊤_l ) -4pq/(p+q)^2( _l ( _k - _G )^⊤ + ( _k - _G )^⊤_l ) = _l ( (p-q/p+q)^2_k + 4pq/(p+q)^2_G )^⊤ + ( (p-q/p+q)^2_k + 4pq/(p+q)^2_G )_l^⊤ This leads to the following formulation for ∂_kjl^'(): ∂_kjl^'() = -1/4Tr{[ + Δ_2 + λ_W_2N]^-1∂_kjl() } + 1/4Tr{[ + Δ_2 + λ_W_2N]^-1[ + Δ_3/n] [ + Δ_2 + λ_W_2N]^-1∂_kjl()} + λ_H^⊤_l_k,j + _kjl = -1/2Tr{[ + Δ_2 + λ_W_2N]^-1[ _l ( (p-q/p+q)^2_k + 4pq/(p+q)^2_G )^⊤] } + 1/2Tr{[ + Δ_2 + λ_W_2N]^-1[ + Δ_3/n] [ + Δ_2 + λ_W_2N]^-1. . ·[ _l ( (p-q/p+q)^2_k + 4pq/(p+q)^2_G )^⊤]} + λ_H^⊤_l_k,j + _kjl Where _kjl represents the remaining trace terms pertaining to the partial derivatives of Δ_2, Δ_3: _kjl =-1/4Tr{[ + Δ_2 + λ_W_2N]^-1∂_k,j,l(Δ_3/n) } + 1/4Tr{[ + Δ_2 + λ_W_2N]^-1[ + Δ_3/n] [ + Δ_2 + λ_W_2N]^-1∂_kjl(Δ_2 )} We now denote := [ + Δ_2 + λ_W_2N]^-1 to obtain: ∂_kjl^'() = -1/2( [ - [ + Δ_3/n] ][ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ] -2λ_H_k,j)^⊤_l + _kjl Without loss of generality, since _kjl∈, we consider _kjl = _k,j^⊤_l where _k,j∈^d_L-1 is a random vector which represents the overall perturbation effect of _kjl. Note that the randomness is associated with the matrix in Δ_2, Δ_3. 
We can now represent ∂_kjl^'() = ⟨_k,j, _l ⟩, where: _k,j = -1/2( [ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ] -2λ_H_k,j - 2_k,j) = [ - [ + Δ_3/n] ] ∙ Derivative of _B: By denoting _B(a,b) = _a^⊤_B _b as the (a,b)-element of _B, we use the above result for ∂_kjl^'() and the chain rule to compute d _B/dt along the flow (<ref>), as follows: d _B(a,b)/dt = ∑_k,j,l∂_k,j,l_B(a,b) d_k,j[l]/dt = ∑_k,j,l∂_k,j,l_B(a,b) ( -∂_k,j,l^'() ) = ∑_k,j∑_l - 1/2n( ⟨_a, _l ⟩⟨_b, _k - _G ⟩ + ⟨_a, _k - _G ⟩⟨_l, _b ⟩) ⟨_k,j, _l ⟩ = ∑_k,j - 1/2n( ⟨_a, _k,j⟩⟨_b, _k - _G ⟩ + ⟨_a, _k - _G ⟩⟨_k,j, _b ⟩) = 1/2n_a^⊤( ∑_k,j - _k,j( _k - _G )^⊤ - ( _k - _G ) _k,j^⊤) _b = 1/4n_a^⊤( ∑_k,j( [ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ] -2λ_H_k,j - 2_k,j) ( _k - _G )^⊤)_b + 1/4n_a^⊤( ∑_k,j( _k - _G ) ( [ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ] -2λ_H_k,j - 2_k,j)^⊤) _b For further simplification, let's consider the following term: ∑_k,j[ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ][ _k - _G ]^⊤ = 1/(p+q)^2∑_k,j[ (p-q)^2_k + 4pq _G ][ _k - _G ]^⊤ = 1/(p+q)^2∑_k,j[ (p-q)^2_k_k^⊤ - (p-q)^2_k_G^⊤ + 4pq_G_k^⊤ - 4pq _G_G^⊤] = 1/(p+q)^2[ (p-q)^22n_B - 8npq_G - 2n_G( (p-q)^2 - 4pq ) ] = 1/(p+q)^2[ 2n(p-q)^2_B + 2n(p-q)^2_G - 8npq_G - 2n_G( (p-q)^2 - 4pq ) ] = 2n(p-q/p+q)^2_B Next, we proceed with the simplification of ∑_k,j-λ_H_k,j( _k - _G )^⊤ - λ_H( _k - _G )_k,j^⊤: -λ_H ( ∑_k,j_k,j( _k - _G )^⊤ + ( _k - _G )_k,j^⊤ ) = -λ_H (n _1( _1 - _G )^⊤ + n _2( _2 - _G )^⊤ + n( _1 - _G ) _1^⊤ + n ( _2 - _G )_2^⊤) = -λ_H (2n_1_1^⊤ + 2n_2_2^⊤ - n_1_G^⊤ - n_2_G^⊤ -n_G_1^⊤ - n_G_2^⊤) = -λ_H ( 2n_1_1^⊤ + 2n_2_2^⊤ - 4n_G_G^⊤) = -λ_H (4n_B - 4n_G) = -4nλ_H_B These results now simplify d _B/dt as: d _B/dt = 1/4n_a^⊤( 2n(p-q/p+q)^2_B + 2n(p-q/p+q)^2_B - 8nλ_H_B - _B )_b Where _B = 2∑_k,j_k,j( _k - _G )^⊤ + ( _k - _G )_k,j^⊤. ∙ Derivative of _W: By denoting _W(a,b) = _a^⊤_W _b as the (a,b)-element of _W, we use a similar line of analysis as above to compute d _W/dt as follows: d _W(a,b)/dt = ∑_k,j,l∂_k,j,l_W(a,b) d_k,j[l]/dt = ∑_k,j,l∂_k,j,l_W(a,b) ( -∂_k,j,l^'() ) = ∑_k,j∑_l - 1/2n( ⟨_a, _l ⟩⟨_b, _k,j - _k ⟩ + ⟨_a, _k,j - _k ⟩⟨_l, _b ⟩) ⟨_k,j, _l ⟩ = ∑_k,j - 1/2n( ⟨_a, _k,j⟩⟨_b, _k,j - _k ⟩ + ⟨_a, _k,j - _k ⟩⟨_k,j, _b ⟩) = 1/2n_a^⊤( ∑_k,j - _k,j( _k,j - _k )^⊤ - ( _k,j - _k ) _k,j^⊤) _b = 1/4n_a^⊤( ∑_k,j( [ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ] -2λ_H_k,j - 2_k,j) ( _k,j - _k )^⊤)_b + 1/4n_a^⊤( ∑_k,j( _k,j - _k ) ( [ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ] -2λ_H_k,j - 2_k,j)^⊤) _b For further simplification, let's consider the following term: ∑_k,j[ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ][ _k, j - _k]^⊤ = ∑_k [ (p-q/p+q)^2_k + 4pq/(p+q)^2_G ][ ∑_j ( _k, j - _k) ]^⊤ = Next, we simplify ∑_k,j-λ_H_k,j( _k, j - _k )^⊤ - λ_H( _k, j - _k )_k,j^⊤ as follows: -λ_H∑_k,j( _k,j( _k, j - _k )^⊤ + ( _k, j - _k )_k,j^⊤) = -λ_H∑_k,j( _k,j_k, j^⊤ - _k,j_k^⊤ + _k, j_k,j^⊤ - _k_k,j^⊤) = -λ_H∑_k,j( _k,j_k, j^⊤ - _k,j_k^⊤ + _k, j_k,j^⊤ - _k_k,j^⊤ + _k_k^⊤ - _k_k^⊤) = -λ_H( 2n _W + 2n_T - 2n_B ) = -4nλ_H_W Where the last inequality is based on the fact that _T = _W + _B. These results simplify d _W/dt as: d _W/dt = 1/4n_a^⊤( - 8nλ_H_W - _W )_b Where _W = 2∑_k,j_k,j( _k,j - _k )^⊤ + (_k,j - _k )_k,j^⊤. 
∙ Trace of covariance matrices along the flow: Taking the derivative of Tr(_W) gives us: d Tr(_W)/dt = -2λ_HTr(_W) -1/4nTr( _W ) = -2λ_HTr(_W) -1/nTr( ∑_k,j_k,j( _k,j - _k )^⊤) Similarly, the derivative of Tr(_B) gives us: d Tr(_B)/dt = - 2λ_HTr(_B) + 1/4nTr( 2n(p-q/p+q)^2_B + 2n(p-q/p+q)^2_B - _B ) = - 2λ_HTr(_B) + 1/2nTr( 2n(p-q/p+q)^2_B - 2∑_k,j_k,j( _k - _G )^⊤) =- 2λ_HTr(_B) + Tr((p-q/p+q)^2_B ) - 1/nTr( ∑_k,j_k,j( _k - _G )^⊤) = Tr( [ (p-q/p+q)^2 - 2λ_H]_B ) - 1/nTr( ∑_k,j_k,j( _k - _G )^⊤) Observe that for small enough perturbation matrix , formally for <E for sufficiently small E, the sign of d Tr(_W)/dt and d Tr(_B)/dt depends on the first term in each of them, and not on the second term that depends on . Therefore, since λ_H>0, we have that Tr(_W) decreases. It is left to show that Tr(_B) increases in this regime. Since the trace of the product of a positive definite matrix and a non-zero positive semidefinite matrix is positive by Von-Neumann trace inequality, we aim for conditions that allow [ (p-q/p+q)^2 - 2λ_H] to be positive definite. First, observe that is symmetric: = [ - [ + Δ_3/n] ] = [ + Δ_2 + λ_W_2N]^-1 = 2n ( _B - 4pq/(p+q)^2_B ) Since _B, _B, , Δ_2 and Δ_3 are symmetric. Observe that 4pq/(p+q)^2≤ 1, because 0 ≤ (p-q)^2 = p^2 +2pq + q^2 - 4pq = (p+q)^2 - 4pq. Thus, = 2n ( _B - 4pq/(p+q)^2_B ) ≥ 2n ( _B - _B ) = 2n _G ≥ 0 Thus also >0 for small Δ_2. Note also that [ + λ_W_2N]^-1 <. Therefore, for small enough (and thus small Δ_2,Δ_3), we have that = - [ + Δ_3/n] > 0. Thus, [ (p-q/p+q)^2 - 2λ_H] is positive definite when: 2λ_H < (p-q/p+q)^2 λ_min( ) Here λ_min( ) represents the smallest eigenvalue of . § PROOF OF THEOREM <REF> Let's begin by calculating the expected value and covariance of features ^(l), which are obtained after the graph convolution operation based on equation <ref>. 
∙ Case c=1: We begin by considering the features of a node belonging to class c=1 as follows: _1, i^(l) = _1^*(l)_1,i^(l-1) + _2^*(l)^(l-1)_t_1,i By expanding the ^(l-1)_t term based on neighbors from all classes, we get: _1, i^(l) = _1^*(l)_1,i^(l-1) + _2^*(l)( ∑_v_1, j∈𝒩_1(v_1,i)_1,j + ∑_v_2, j∈𝒩_2(v_1,i)_2,j/|𝒩(v_1,i)|) Now, by taking expectations on both sides with respect to features _1,i^(l-1) and structure _t, we get: 𝔼_, _1, i^(l) = _1^*(l)𝔼_, _1,i^(l-1) + _2^*(l)𝔼_, ( ∑_v_1, j∈𝒩_1(v_1,i)_1,j + ∑_v_2, j∈𝒩_2(v_1,i)_2,j/|𝒩(v_1,i)|) = _1^*(l)_1^(l-1) + _2^*(l)( np_1^(l-1) + nq_2^(l-1)/n(p+q)) = (_1^*(l) + p/p+q_2^*(l))_1^(l-1) + (q/p+q_2^*(l)) _2^(l-1) Similarly, the covariance 𝔼_, ( [ _1, i^(l) - 𝔼_, _1, i^(l)][ _1, i^(l) - 𝔼_, _1, i^(l)]^⊤) is based on: [ _1^*(l)_1,i^(l-1) + _2^*(l)^(l-1)_t_1,i - 𝔼_, _1, i^(l)] ·[ _1^*(l)_1,i^(l-1) + _2^*(l)^(l-1)_t_1,i - 𝔼_, _1, i^(l)]^⊤ = [ _1^*(l)(_1,i^(l-1) - _1^(l-1)) + _2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q) ] ·[ _1^*(l)(_1,i^(l-1) - _1^(l-1)) + _2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q) ]^⊤ = _1^*(l)(_1,i^(l-1) - _1^(l-1))(_1,i^(l-1) - _1^(l-1))^⊤_1^*(l)⊤ + _1^*(l)(_1,i^(l-1) - _1^(l-1))( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤ + _2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)(_1,i^(l-1) - _1^(l-1))^⊤_1^*(l)⊤ + _2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤ The expectation of the term _1^*(l)(_1,i^(l-1) - _1^(l-1))(_1,i^(l-1) - _1^(l-1))^⊤_1^*(l)⊤ is given by: 𝔼_, [ _1^*(l)(_1,i^(l-1) - _1^(l-1))(_1,i^(l-1) - _1^(l-1))^⊤_1^*(l)⊤] = _1^*(l)𝔼_, [(_1,i^(l-1) - _1^(l-1))(_1,i^(l-1) - _1^(l-1))^⊤]_1^*(l)⊤ = _1^*(l)_1^(l-1)_1^*(l)⊤ The expectation of _1^*(l)(_1,i^(l-1) - _1^(l-1))( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤ is given by the following: 𝔼_, [_1^*(l)(_1,i^(l-1) - _1^(l-1))( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] = 𝔼_, [_1^*(l)_1,i^(l-1)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] - 𝔼_, [_1^*(l)_1^(l-1)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] = _1^*(l)𝔼_[ _1,i^(l-1)( p∑_j=1^n _1,j^(l-1) + q∑_j=1^n _2,j^(l-1)/n(p+q) - p_1^(l-1) + q_2^(l-1)/p+q)^⊤]_2^*(l)⊤ = _1^*(l)𝔼_[ _1,i^(l-1)( p/n(p+q)_1,i^(l-1)⊤) ]_2^*(l)⊤ + _1^*(l)𝔼_[ _1,i^(l-1)( p∑_j=1, i^n _1,j^(l-1) + q∑_j=1^n _2,j^(l-1)/n(p+q) - p_1^(l-1) + q_2^(l-1)/p+q)^⊤]_2^*(l)⊤ Since the features are independent draws from their normal distributions, we can simplify the expectation as follows: 𝔼_, [_1^*(l)(_1,i^(l-1) - _1^(l-1))( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] = _1^*(l)[ p/n(p+q)(_1^(l-1) + _1^(l-1)_1^(l-1)⊤) ]_2^*(l)⊤ + _1^*(l)[ _1^(l-1)( (n-1)p_1^(l-1) + nq_2^(l-1)/n(p+q) - p_1^(l-1) + q_2^(l-1)/p+q)^⊤]_2^*(l)⊤ = _1^*(l)[ p/n(p+q)_1^(l-1)]_2^*(l)⊤ Similarly, the expectation of _2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)(_1,i^(l-1) - _1^(l-1))^⊤_1^*(l)⊤ is given by the following: 𝔼_, [_2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)(_1,i^(l-1) - _1^(l-1))^⊤_1^*(l)⊤] = 𝔼_, [_1^*(l)(_1,i^(l-1) - _1^(l-1))( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤]^⊤ = _2^*(l)[ p/n(p+q)_1^(l-1)]_1^*(l)⊤ Next, the expectation of: _2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤ can be computed as follows: 𝔼_, [_2^*(l)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q) . . 
·( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] = 𝔼_, [_2^*(l)( ^(l-1)_t_1,i)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] - 𝔼_, [_2^*(l)( p_1^(l-1) + q_2^(l-1)/p+q)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] Where the second term reduces to 0. On expanding the first term, we get: 𝔼_, [_2^*(l)( ^(l-1)_t_1,i)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] = _2^*(l)𝔼_, [ ( ^(l-1)_t_1,i)( ^(l-1)_t_1,i)^⊤] _2^*(l)⊤ - _2^*(l)𝔼_, [ ( ^(l-1)_t_1,i)( p_1^(l-1) + q_2^(l-1)/p+q)^⊤] _2^*(l)⊤ = _2^*(l)𝔼_[ ( p∑_j=1^n _1,j^(l-1) + q∑_j=1^n _2,j^(l-1)/n(p+q))( p∑_j=1^n _1,j^(l-1) + q∑_j=1^n _2,j^(l-1)/n(p+q))^⊤] _2^*(l)⊤ - _2^*(l)[ ( p_1^(l-1) + q_2^(l-1)/p+q)( p_1^(l-1) + q_2^(l-1)/p+q)^⊤] _2^*(l)⊤ = _2^*(l)𝔼_[ ( p^2∑_j=1^n _1,j^(l-1)_1,j^(l-1)⊤ + q^2∑_j=1^n _2,j^(l-1)_2,j^(l-1)⊤/n^2(p+q)^2) ] _2^*(l)⊤ + _2^*(l)𝔼_[ ( p^2∑_j=1^n∑_j'=1, j^n _1,j^(l-1)_1,j'^(l-1)⊤ + q^2∑_j=1^n∑_j'=1, j^n_2,j^(l-1)_2,j'^(l-1)⊤/n^2(p+q)^2) ] _2^*(l)⊤ + _2^*(l)𝔼_[ ( pq∑_j=1^n∑_j'=1^n _1,j^(l-1)_2,j'^(l-1)⊤ + pq∑_j=1^n∑_j'=1^n _2,j^(l-1)_1,j'^(l-1)⊤/n^2(p+q)^2) ] _2^*(l)⊤ - _2^*(l)[ ( p_1^(l-1) + q_2^(l-1)/p+q)( p_1^(l-1) + q_2^(l-1)/p+q)^⊤] _2^*(l)⊤ Due to the independence of the features, we can simplify the expectation as follows: 𝔼_, [_2^*(l)( ^(l-1)_t_1,i)( ^(l-1)_t_1,i - p_1^(l-1) + q_2^(l-1)/p+q)^⊤_2^*(l)⊤] = _2^*(l)[ ( np^2(_1^(l-1) + _1^(l-1)_1^(l-1)⊤) + nq^2(_2^(l-1) + _2^(l-1)_2^(l-1)⊤) /n^2(p+q)^2) ] _2^*(l)⊤ + _2^*(l)[ ( p^2(n^2 - n)_1^(l-1)_1^(l-1)⊤ + q^2(n^2 - n)_2^(l-1)_2^(l-1)⊤/n^2(p+q)^2) ] _2^*(l)⊤ + _2^*(l)[ ( pqn^2_1^(l-1)_2^(l-1)⊤ + pqn^2_2^(l-1)_1^(l-1)⊤/n^2(p+q)^2) ] _2^*(l)⊤ - _2^*(l)[ ( p^2_1^(l-1)_1^(l-1)⊤ + pq_1^(l-1)_2^(l-1)⊤ + pq_2^(l-1)_1^⊤ + q^2_2^(l-1)_2^(l-1)⊤/(p+q)^2) ] _2^*(l)⊤ = _2^*(l)[ p^2_1^(l-1) + q^2_2^(l-1)/n(p+q)^2] _2^*(l)⊤ Putting these results together, the covariance 𝔼_, ( [ _1, i^(l) - 𝔼_, _1, i^(l)][ _1, i^(l) - 𝔼_, _1, i^(l)]^⊤) is: 𝔼_, ( [ _1, i^(l) - 𝔼_, _1, i^(l)][ _1, i^(l) - 𝔼_, _1, i^(l)]^⊤) = _1^*(l)_1^(l-1)_1^*(l)⊤ + p/n(p+q)_1^*(l)_1^(l-1)_2^*(l)⊤ + p/n(p+q)_2^*(l)_1^(l-1)_1^*(l)⊤ + _2^*(l)[ p^2_1^(l-1) + q^2_2^(l-1)/n(p+q)^2] _2^*(l)⊤ To summarize, the means and covariance matrices for ^(l) can be given by: _1^(l) = (_1^*(l) + p/p+q_2^*(l))_1^(l-1) + (q/p+q_2^*(l)) _2^(l-1) _1^(l) = _1^*(l)_1^(l-1)_1^*(l)⊤ + p/n(p+q)_1^*(l)_1^(l-1)_2^*(l)⊤ + p/n(p+q)_2^*(l)_1^(l-1)_1^*(l)⊤ + _2^*(l)[ p^2_1^(l-1) + q^2_2^(l-1)/n(p+q)^2] _2^*(l)⊤ ∙ Case c=2: The analysis presented above can be extended for c=2 in a straightforward fashion to get _2^(l), _2^(l) as follows: _2^(l) = (_1^*(l) + p/p+q_2^*(l))_2^(l-1) + (q/p+q_2^*(l)) _1^(l-1) _2^(l) = _1^*(l)_2^(l-1)_1^*(l)⊤ + p/n(p+q)_1^*(l)_2^(l-1)_2^*(l)⊤ + p/n(p+q)_2^*(l)_2^(l-1)_1^*(l)⊤ + _2^*(l)[ p^2_2^(l-1) + q^2_1^(l-1)/n(p+q)^2] _2^*(l)⊤ §.§ Modelling an increase/decrease in between-class variability Let _B^(l-1)=(_1^(l-1) - _2^(l-1))(_1^(l-1) - _2^(l-1))^⊤ indicate the between-class covariance matrix for features at layer l-1. Based on the equations <ref>, <ref>, we analyze _B(^(l)) to understand the effect of the convolution layer on feature separation. 
First, observe that: _B(^(l)) = (_1^(l) - _2^(l))(_1^(l) - _2^(l))^⊤ = ( _1^*(l) + p-q/p+q_2^*(l))( _1^(l-1) - _2^(l-1)) ·( _1^(l-1) - _2^(l-1))^⊤( _1^*(l) + p-q/p+q_2^*(l))^⊤ = ( _1^*(l) + p-q/p+q_2^*(l))_B(^(l-1))( _1^*(l) + p-q/p+q_2^*(l))^⊤ By taking the trace on both sides, we get: Tr(_B(^(l))) = Tr(( _1^*(l) + p-q/p+q_2^*(l))_B(^(l-1))( _1^*(l) + p-q/p+q_2^*(l))^⊤) = Tr(_B(^(l-1))( _1^*(l) + p-q/p+q_2^*(l))^⊤( _1^*(l) + p-q/p+q_2^*(l)) ) Where ( _1^*(l) + p-q/p+q_2^*(l))^⊤( _1^*(l) + p-q/p+q_2^*(l)) ∈^d_l-1× d_l-1 is a symmetric and positive semi-definite matrix. This matrix product formulation allows us to leverage the eigenvalue-based trace inequalities <cit.>. Formally, let _B = ( _1^*(l) + p-q/p+q_2^*(l))^⊤( _1^*(l) + p-q/p+q_2^*(l)). We now leverage Corollary.6 in <cit.>, and get the following inequality based on the eigenvalues of _B(^(l-1)), _B as: ∑_i=1^d_l-1λ_d_l-1 - i + 1( _B(^(l-1)))λ_i(_B) ≤Tr(_B(^(l-1))_B) ≤∑_i=1^d_l-1λ_i( _B(^(l-1)))λ_i(_B) Where λ_i(.) represents the i^th largest eigenvalue of a matrix. Additionally, based on the standard trace equality: Tr(_B(^(l-1))) = ∑_i=1^d_l-1λ_i( _B(^(l-1))), the increase/decrease in Tr(_B(^(l))) with respect to Tr(_B(^(l-1))) boils down to: ∑_i=1^d_l-1λ_d_l-1-i+1( _B(^(l-1)))λ_i(_B) /∑_i=1^d_l-1λ_i( _B(^(l-1)))≤Tr(_B(^(l)))/Tr(_B(^(l-1)))≤∑_i=1^d_l-1λ_i( _B(^(l-1)))λ_i(_B) /∑_i=1^d_l-1λ_i( _B(^(l-1))) §.§ Modelling an increase/decrease in within-class variability Let _W(^(l-1)) = 1/2( _1^(l-1) + _2^(l-1)) represent the within-class covariance matrix for features ^(l-1) in our balanced class setting. Similar to the previous analysis, we leverage the results in equations <ref>,<ref> to model _W(^(l)) as follows: _W(^(l)) = 1/2( _1^(l) + _2^(l)) = 1/2( _1^*(l)( _1^(l-1) + _2^(l-1)) _1^*(l)⊤) + 1/2(p/n(p+q)_1^*(l)( _1^(l-1) + _2^(l-1)) _2^*(l)⊤) + 1/2(p/n(p+q)_2^*(l)( _1^(l-1) + _2^(l-1)) _1^*(l)⊤) + 1/2(_2^*(l)[ (p^2 + q^2)( _1^(l-1) + _2^(l-1)) /n(p+q)^2] _2^*(l)⊤) By taking trace on both sides, we get: Tr( _W(^(l)) ) = Tr( _W(^(l-1)) _1^*(l)⊤_1^*(l)) + Tr(p/n(p+q)_W(^(l-1)) _2^*(l)⊤_1^*(l)) + Tr(p/n(p+q)_W(^(l-1)) _1^*(l)⊤_2^*(l)) + Tr( (p^2 + q^2)/n(p+q)^2_W(^(l-1)) _2^*(l)⊤_2^*(l)) = Tr( _W(^(l-1)) [_1^*(l)⊤_1^*(l) + p/n(p+q)[ _2^*(l)⊤_1^*(l) + _1^*(l)⊤_2^*(l)] . . . .+ (p^2 + q^2)/n(p+q)^2_2^*(l)⊤_2^*(l)] ) Let _W = _1^*(l)⊤_1^*(l) + p/n(p+q)[ _2^*(l)⊤_1^*(l) + _1^*(l)⊤_2^*(l)] + (p^2 + q^2)/n(p+q)^2_2^*(l)⊤_2^*(l). Then, observe that _W ∈^d_l-1× d_l-1 is symmetric and positive semi-definite. To this end, the increase/decrease in Tr(_W(^(l))) with respect to Tr(_W(^(l-1))) boils down to: ∑_i=1^d_l-1λ_d_l-1-i+1( _W(^(l-1)))λ_i(_W) /∑_i=1^d_l-1λ_i( _W(^(l-1)))≤Tr(_W(^(l)))/Tr(_W(^(l-1)))≤∑_i=1^d_l-1λ_i( _W(^(l-1)))λ_i(_W) /∑_i=1^d_l-1λ_i( _W(^(l-1))) § ADDITION EXPERIMENTS ∙ Infrastructure details: We perform experiments on a virtual machine with 8 Intel(R) Xeon(R) Platinum 8268 CPUs, 32GB of RAM, and 1 Quadro RTX 8000 GPU with 32GB of allocated memory. Our Python package `gnn_collapse' leverages PyTorch 1.12.1 and PyTorch-Geometric (PyG) 2.1.0 frameworks. For reproducible experiments and consistency with previous research, we extend the SBM generator by <cit.> and NC metrics by <cit.>. 
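For completeness, a minimal sketch of balanced SSBM sampling is given below; our experiments rely on the generator of <cit.>, so this snippet (with names of our choosing) is only an illustrative stand-in for how the datasets described next are produced.

import numpy as np

def sample_ssbm(N, C, p, q, rng=None):
    # Balanced SSBM(N, C, p, q): C equal-size communities, intra-community edge
    # probability p, inter-community probability q. Returns a symmetric 0/1
    # adjacency matrix (self-loops and degree normalization, needed to form the
    # operator used by the GNNs, are applied separately) and the label vector.
    rng = np.random.default_rng() if rng is None else rng
    assert N % C == 0
    y = np.repeat(np.arange(C), N // C)
    prob = np.where(y[:, None] == y[None, :], p, q)
    upper = np.triu(rng.random((N, N)) < prob, k=1)
    A = (upper | upper.T).astype(float)
    return A, y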
§.§ Experiments with GNNs to track penultimate layer features ∙ Datasets: We consider a variety of SSBM graph datasets as follows: D1: C=2, N=1000, p=0.025, q=0.0017, K=1000 D2: C=4, N=1500, p=0.072, q=0.0048, K=1000 ∙ GNNs: In our experiments, we empirically track the NC metrics of penultimate layer features during training for both the GNN designs ψ_Θ^, ψ_Θ^'. The number of layers is set to 32 and the hidden dimension is set to 8 across layers for datasets with C=2 and set to 16 for datasets with C=4. ∙ Optimization: The GNNs are trained for 8 epochs using stochastic gradient descent (SGD) with momentum 0.9, weight decay 5 × 10^-4, and a learning rate set to 0.004 for D1 and 0.006 for D2. ∙ Observations: Figures <ref>, <ref> illustrate the training loss, overlap and all the NC metrics that we defined in our setup for ψ_Θ^, ψ_Θ^' on dataset D1. Note that when C=2, the re-centering of the 2 class-means by subtracting the global mean, always leads to separation with maximal angle, irrespective of the configuration of the non-centered class-means. Thus, we skip the corresponding _2 plots for , when C=2. Additionally, we can observe similar trends in NC metrics from Figures <ref>, <ref> even after increasing N, C in dataset D2. Additionally, in all these experiments, notice that _2, _3 metrics do not show a significant reduction. In this context, a reduction indicates that a simplex equiangular tight-frame (simplex ETF) or an orthogonal frame (OF) is the desired configuration for weights and penultimate layer feature (re-centered) class-means. This behaviour can be linked to the presence of in the risk formulation. However, our understanding of the role of in determining these alignments towards simplex ETF or an OF is still unclear and would be a valuable future effort. §.§ Experiments with UFM to model penultimate layer behavior To improve our understanding of the empirical behavior of GNNs and to validate our theoretical results with the gUFM, we prepare datasets based on two strategies: * Case C^+: This case represents a graph that satisfies condition C. Without loss of generality, we leverage the expected degrees of nodes and consider t_cc = ⌈ n*p ⌉, c ∈ [C] and t_cc' = ⌈ n*q ⌉, c c' ∈ [C]. Based on our notation, recall that Ω_c, Ω_c' represents the set of nodes belonging to classes(communities) c, c' ∈ [C] respectively. To this end, observe that the following conditions should be satisfied[We utilize NetworkX python libraries to generate SSBM graphs that satisfy these conditions.]: * The sub-graph formed by nodes Ω_c should be t_cc-regular, for all c ∈ [C]. * The bipartite sub-graph formed by Ω_c, Ω_c' should be t_cc'-regular, for all c, c' ∈ [C]. * Case C^-: This case represents a graph that does not satisfy condition C. Since any random graph sampled from SSBM(N, C, p, q) satisfies this requirement with a high probability (as per theorem 3.2), we simply use this randomly sampled graph. As a simple sanity check, one can verify if the sampled graph satisfies the condition C and re-sample. Especially, we consider the ^+,^- variants of SSBM graphs with following parameters: D1: C=2, N=1000, p=0.025, q=0.0017, K=10 D2: C=4, N=1500, p=0.072, q=0.0048, K=10 ∙ Optimization: The gUFMs are trained using plain gradient descent for 50000 epochs with a learning rate of 0.1 and L2 regularization parameters λ_W_1 = λ_W_2 = λ_H = 5 × 10^-3.[λ_W_1 is not applicable for gUFM based on ψ_Θ^'. 
] ∙ Observations: Figures <ref>, <ref>, and <ref>, <ref> illustrate the training loss, overlap and the NC metrics for the gUFM acting on ^- variants of datasets D1, D2 respectively. Although gUFM is an optimistic mathematical model, we can observe a close resemblance of the values and trends of NC metrics to those of the penultimate layer features of the actual GNNs. This observation is justified as any random SSBM graph fails to satisfy condition with a high probability. To this end, observe from Figures <ref>, <ref>, <ref>, <ref> that when graphs satisfy condition , the NC1 metrics tend to reduce drastically (for gUFM designs based on ψ_Θ^, ψ_Θ^' and C=2,4). Thus proving our theoretical results. Furthermore, observe from Figures <ref>, <ref> that the (re-centered) class means for , tend to align very closely to a simplex ETF and tend to converge at such a configuration. Based on our previous observations for GNN training, we underscore this observation for gUFM and emphasize that a rigorous theoretical analysis on the role of in determining the structures of can be a crucial future effort. §.§ Experiments with GNNs and Spectral Methods The main focus of this section is to emphasize the differences between power iterations-based spectral methods and GNNs. To this end, we consider the following datasets and GNNs as follows: D1: C=2, N=1000, p=0.025, q=0.0017, K(train)=1000, K(test)=100 D2: C=2, N=1000, p=0.0017, q=0.025, K(train)=1000, K(test)=100 ∙ GNNs: In our experiments, we empirically track the NC metrics of penultimate layer features during training for both the GNN designs ψ_Θ^, ψ_Θ^'. The number of layers is varied based on L=64,128 and the hidden dimension is set to 8 across layers. ∙ Spectral methods: The number of projected power-iterations are varied based on L=64,128 for a fair comparison with GNNs. ∙ Optimization: The GNNs are trained for 8 epochs using stochastic gradient descent (SGD) with learning rate of 0.004, momentum 0.9 and a weight decay of 5 × 10^-4. ∙ Observations: For the dataset 1 with homophilic graphs, we first plot the training metrics for GNNs ψ_Θ^, ψ_Θ^' with L=64 in Figures <ref>, <ref> respectively and ensure that they reach TPT. Now, from Figures <ref>, <ref> observe that the ratios Tr(_B(^(l)))/Tr(_B(^(l-1))), Tr(_W(^(l)))/Tr(_W(^(l-1))) tend to be constant throughout the power iterations for spectral methods, whereas, the GNNs behave differently as Tr(_B(^(l)))/Tr(_B(^(l-1))), Tr(_W(^(l)))/Tr(_W(^(l-1))) tend to decrease across depth. We make similar observations for GNNs ψ_Θ^, ψ_Θ^' with L=128 in Figures <ref>,<ref>,<ref>,<ref> and note that the behaviour is the same as L=32 case illustrated in the main text. However, when considering the dataset 2 with heterophilic graphs, even though the GNNs reach TPT during training (Figures <ref>,<ref>,<ref>,<ref> ), the evolution of ratios Tr(_B(^(l)))/Tr(_B(^(l-1))), Tr(_W(^(l)))/Tr(_W(^(l-1))) tends to differ especially for the GNN ψ_Θ^' when L=64 (Figures <ref>,<ref>) and L=128 (Figures <ref>,<ref>). Thus, highlighting the empirical role of depth in GNN design which requires further investigations in future efforts.
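As a reference for how the per-layer quantities reported in this appendix are computed from intermediate features, the following minimal sketch may be useful. Feature matrices are taken here in node-major (N x d) layout, classes are assumed balanced as in all of our SSBM experiments, and the helper names are ours.

import numpy as np

def within_between_traces(H, y, C):
    # H: (N, d) features at one layer (or one power iteration); y: class labels.
    # Returns (Tr(Sigma_W), Tr(Sigma_B)) with the class-balanced averaging used in the text.
    mu_G = H.mean(axis=0)
    tr_W, tr_B = 0.0, 0.0
    for c in range(C):
        Hc = H[y == c]
        mu_c = Hc.mean(axis=0)
        tr_W += ((Hc - mu_c) ** 2).sum() / len(Hc)
        tr_B += ((mu_c - mu_G) ** 2).sum()
    return tr_W / C, tr_B / C

def depthwise_curves(feature_list, y, C):
    # feature_list: [H^(1), ..., H^(L)] collected after each layer/iteration.
    traces = [within_between_traces(H, y, C) for H in feature_list]
    nc1_tilde = [w / b for (w, b) in traces]                       # Tr(Sigma_W)/Tr(Sigma_B)
    ratio_W = [traces[l][0] / traces[l - 1][0] for l in range(1, len(traces))]
    ratio_B = [traces[l][1] / traces[l - 1][1] for l in range(1, len(traces))]
    return nc1_tilde, ratio_W, ratio_B

Exact collapse would make the denominators vanish, so in practice a small constant can be added before forming the ratios.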
http://arxiv.org/abs/2307.03233v1
20230706180010
Compilation of a simple chemistry application to quantum error correction primitives
[ "Nick S. Blunt", "György P. Gehér", "Alexandra E. Moylett" ]
quant-ph
[ "quant-ph" ]
[email protected] Riverlane, St Andrews House, 59 St Andrews Street, Cambridge, CB2 3BZ, UK A number of exciting recent results have been seen in the field of quantum error correction. These include initial demonstrations of error correction on current quantum hardware, and resource estimates which improve understanding of the requirements to run large-scale quantum algorithms for real-world applications. In this work, we bridge the gap between these two developments by performing careful estimation of the resources required to fault-tolerantly perform quantum phase estimation (QPE) on a minimal chemical example. Specifically, we describe a detailed compilation of the QPE circuit to lattice surgery operations for the rotated surface code, for a hydrogen molecule in a minimal basis set. We describe a number of optimisations at both the algorithmic and error correction levels. We find that implementing even a simple chemistry circuit requires 900 qubits and 2,300 quantum error correction rounds, emphasising the need for improved error correction techniques specifically targeting the early fault-tolerant regime. Compilation of a simple chemistry application to quantum error correction primitives Alexandra E. Moylett August 1, 2023 ==================================================================================== § INTRODUCTION Quantum error correction (QEC), the study of how many noisy physical qubits are used to represent a smaller number of less noisy logical qubits, has seen significant recent developments in a number of directions. One such success is experimental demonstrations of error correction successfully suppressing errors on a real-world quantum device <cit.>. Another recent development is in careful resource estimates, which have allowed for more accurate estimates of the resources a quantum computer requires to solve problems of significant interest, from estimating chemical properties <cit.> to factoring RSA integers <cit.>. These developments together have helped define both the current state of our abilities to suppress noise on quantum devices, and where we need to get to in order to solve key industrial problems. There are some natural next steps following the experimental demonstration of a logical quantum memory. Natural follow-ups include implementing basic logical gates: implementing Pauli gates through transversal operations, non-Pauli Clifford gates through lattice surgery techniques <cit.>, and non-Clifford gates initially through error mitigation techniques <cit.> and later through magic state distillation <cit.>. Eventually, a natural goal will be to demonstrate small-scale quantum algorithms, showing that these logical operations can be used to solve a toy application. Understanding the resources required for such an algorithm is important for knowing the point at which small applications can start being solved on fault-tolerant quantum computers, as well as helping us understand the constant factors in the scaling of large quantum algorithms. A number of algorithms have recently been proposed that are aimed specifically at this regime, referred to as “early fault-tolerant” algorithms <cit.>; it is therefore particularly relevant to assess how challenging even minimal applications will be to perform using fault-toleration operations. In this work, we estimate the resources required for implementing a small quantum algorithm on a fault tolerant quantum computer, including detailed consideration of how to perform each required operation using lattice surgery. 
The application we choose is quantum phase estimation (QPE) applied to finding the ground-state energy of the hydrogen molecule. This application is sufficiently small that related circuits without QEC have already been successfully run on current quantum hardware <cit.>. We investigate optimisations of this algorithm at a variety of levels, including algorithmic <cit.>, gate decompositions <cit.>, compilation to lattice surgery primitives <cit.> and generation of magic states <cit.>. Our final resource estimates are presented in Table <ref>, looking at different physical error rates and techniques which trade off time and space resource requirements. It is worth noting that when implemented on the surface code, even this small application requires hundreds of physical qubits and thousands of QEC rounds. This shows the significant prefactor associated with quantum error correction, and suggests that in early fault-tolerance further techniques will be required to yield small-scale algorithmic demonstrations <cit.>. The rest of this paper is laid out as follows. In Section <ref>, we review the algorithms and chemical system to be considered, and present the logical quantum circuit. In Section <ref>, we describe how to decompose the logical quantum circuits into operations from the Clifford+T gate set and how to implement these gates on the surface code using lattice surgery primitives. In Section <ref> we estimate the overhead introduced by quantum error correction. Finally, we conclude with some open questions and further directions for research in Section <ref>. § LOGICAL QUANTUM CIRCUIT §.§ Quantum phase estimation We begin with a brief introduction to the quantum algorithms considered in this study, which are two types of quantum phase estimation (QPE) <cit.>. QPE is one of the key proposed quantum algorithms for calculating ground and excited-state energies in electronic structure problems. Provided an initial trial state can be prepared that has a sufficiently good overlap with the true ground state (which is usually the case for molecular systems), QPE is capable of obtaining energy estimates to a desired precision in polynomial time with system size. However, the algorithm requires high circuit depths for non-trivial examples, and so has seen less attention compared to variational quantum algorithms in current NISQ applications. For fault-tolerant applications, however, it is often regarded as the algorithm of choice. We focus on the “textbook” <cit.> and iterative (semi-classical) QPE algorithms <cit.>. The textbook QPE algorithm is perhaps the best known QPE approach, the circuit for which is presented in Fig. <ref>. The algorithm allows one to measure the eigenphases of some unitary U up to m bits of precision; doing so requires m ancilla qubits, in addition to the n data qubits needed to represent U. At the end of the circuit, an inverse quantum Fourier transform (QFT) is performed and the ancilla qubits are measured. If the input state |ψ⟩ is an exact eigenstate of U, then the measured bits will yield the bits of the corresponding eigenphase. For a non-exact |ψ⟩, the probability of obtaining the desired phase will depend on the overlap between |ψ⟩ and the corresponding exact eigenstate. The inverse QFT can also be performed in a semi-classical manner <cit.>. Using such a semi-classical QFT, the resulting phase estimation algorithm is performed iteratively, obtaining one bit of information about the phase from each iteration. 
We refer to this approach as iterative quantum phase estimation <cit.>. Iterative QPE has many of the benefits of the textbook approach, including a Heisenberg-limited running time 𝒪(ϵ^-1) for a precision of ϵ, but has the significant benefit that it uses only a single ancilla qubit. We briefly give some analysis of the iterative QPE approach here. We are interested in estimating the eigenvalues of a Hamiltonian H = ∑_j=1^L c_j P_j, where P_j are n-qubit Pauli operators and c_j are coefficients. We denote the eigenvalues and eigenvectors of H by {λ_j ; |Ψ_j⟩}. We multiply H by a constant t such that -0.5 ≤λ_j t ≤ 0.5 for all j, which can always be achieved by choosing 1 / t = 2∑_j |c_j|. We then work with the unitary U = e^2 π i H t. The eigenvalues of U are e^2 π i ϕ_j, where the range 0 ≤ϕ_j ≤ 1 can be chosen. It is then simple to obtain λ_j t from ϕ_j, which only differ due to the wrapping of phases; the normalization of H t above is chosen to avoid potential ambiguity in this wrapping. Therefore each ϕ_j can be written in binary as ϕ_j = 0.ϕ_j1ϕ_j2…ϕ_jm…. In iterative QPE the bits ϕ_jk are measured directly using the circuit in Fig. <ref>. The circuit is performed for m iterations in order to obtain m bits of precision for ϕ_j, starting with k=m and iterating backwards to k=1. After each controlled-unitary operation, an R_z(ω_k) gate is applied to the ancilla with angle ω_k = -π(0.x_k+1x_k+2… x_m), which depends on the measurement results from previous iterations (and ω_m = 0 in the initial iteration). The data qubits are prepared in an initial state |ψ⟩, which should be an approximation to the exact state whose energy is to be estimated. We write |ψ⟩ in the eigenbasis of H by |ψ⟩ = ∑_j ν_j |Ψ_j⟩. The state of the qubits before the first measurement (k=m) is then 1/2∑_j ν_j [ (1 + e^i 2^m πϕ_j) |0⟩ + (1 - e^i 2^m πϕ_j) |1⟩] ⊗ |Ψ_j⟩. Consider the simple case where ϕ_j can be represented by exactly m bits, so that ϕ_j = 0.ϕ_j1ϕ_j2…ϕ_jm 0 0 …. In this case 2^m πϕ_j = πϕ_jm exactly, and the state of the system before measurement is 1/2∑_j ν_j [ (1 + e^i πϕ_jm) |0⟩ + (1 - e^i πϕ_jm) |1⟩] ⊗ |Ψ_j⟩. Thus the probabilities of measuring the ancilla as 0 or 1 are P_0 = ∑_j |ν_j|^2 cos^2 ( πϕ_jm/2), P_1 = ∑_j |ν_j|^2 sin^2 ( πϕ_jm/2). Provided that |ν_j| is sufficiently large for the desired state |Ψ_j⟩, the desired bit will be measured with high probability. The measurement will also project away the contribution from those states |Ψ_j⟩ for which ϕ_jm does not match the measured result. It is simple to continue this process for subsequent iterations to k=1. After the final iteration, the probability that all of the bits for the desired ϕ_j were measured is |ν_j|^2. Therefore, for a sufficiently good initial state, and a sufficient number of repetitions, the ground-state energy can be measured with high probability. Further clear analysis is given in <cit.>. In addition to the textbook and iterative QPE methods, there has been recent progress on statistical phase estimation methods <cit.>. Compared to the above approaches, such statistical methods allow shorter circuit depth <cit.> and ready combination with error mitigation techniques <cit.>, in exchange for performing many circuits. It has been suggested that these methods are particularly appropriate for early-fault tolerant quantum computers. We do not consider such methods here, but note that they would be interesting to investigate further in the context considered here. 
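As a concrete illustration of the loop just described, the following NumPy sketch simulates iterative QPE exactly on a statevector. It is an illustration only: the Hamiltonian is the single-qubit tapered H_2 operator introduced later in the paper (used here simply as a convenient example), the trial state, number of bits and seed are arbitrary choices, and the controlled evolution is applied exactly rather than via the Trotterised circuit compiled in the following sections.

```python
import numpy as np
from scipy.linalg import expm

# Minimal statevector sketch of the iterative QPE loop (ancilla feedback angles
# follow the convention above). With the trial state |1>, the ground-state
# energy is recovered with high probability.
c1, c2 = 0.78796736, 0.18128881
Z = np.diag([1.0 + 0j, -1.0])
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = c1 * Z + c2 * X

t = 1.0 / (2 * (abs(c1) + abs(c2)))          # so that -0.5 <= lambda*t <= 0.5
U = expm(2j * np.pi * t * H)                 # U = exp(2*pi*i*H*t)
m = 8                                        # bits of precision
rng = np.random.default_rng(7)

psi = np.array([0.0, 1.0], dtype=complex)    # trial state, close to the ground state
bits = []                                    # will hold x_m, x_{m-1}, ..., x_1
for k in range(m, 0, -1):
    # ancilla in |+>; joint state stored as two rows (ancilla = 0 / 1)
    state = np.vstack([psi, psi]) / np.sqrt(2)
    # controlled-U^(2^(k-1)) acts only on the ancilla-|1> branch
    state[1] = np.linalg.matrix_power(U, 2 ** (k - 1)) @ state[1]
    # feedback phase omega_k = -pi * (0.x_{k+1} x_{k+2} ... x_m) on the |1> branch
    omega = -np.pi * sum(b / 2 ** (j + 1) for j, b in enumerate(reversed(bits)))
    state[1] = np.exp(1j * omega) * state[1]
    # Hadamard on the ancilla, then measure it
    branch0 = (state[0] + state[1]) / np.sqrt(2)
    branch1 = (state[0] - state[1]) / np.sqrt(2)
    p1 = np.vdot(branch1, branch1).real
    bit = int(rng.random() < p1)
    bits.append(bit)
    psi = branch1 if bit else branch0
    psi = psi / np.linalg.norm(psi)          # post-measurement system state

phi = sum(b / 2 ** (i + 1) for i, b in enumerate(reversed(bits)))   # 0.x_1 x_2 ... x_m
energy = (phi - 1) / t if phi > 0.5 else phi / t
print(f"estimated E = {energy:.4f} Ha, exact ground state E = {-np.hypot(c1, c2):.4f} Ha")
```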
§.§ Hamiltonian simulation via Trotterization In this section we briefly discuss first and second-order Trotterization, and present an optimization to the latter. We consider n-qubit Hamiltonians of the form of Equation <ref>. We specifically denote H_j = c_j P_j, so that H = ∑_j=1^L H_j. In Trotter schemes more generally, each H_j might correspond to a sum of commuting Pauli terms, rather than a single Pauli contribution. We are concerned with implementing an operator U = e^iHt, controlled on an ancilla qubit. The well-known first- and second-order Trotter approximations, U_1 and U_2, are U_1 = ∏_j=1^L e^i H_j t and U_2 = ∏_j=1^L e^i H_j t/2∏_j=L^1 e^i H_j t/2, which have errors 𝒪(t^2) and 𝒪(t^3) compared to the exact U, respectively. Let us consider the number of single-qubit rotations needed to implement the controlled U_1 and U_2 unitaries, as required to perform QPE. Each controlled Pauli rotation, which we shall denote by W_j, can be rewritten as W_j = |0⟩⟨ 0| ⊗1 + |1⟩⟨ 1| ⊗exp(i θ_j P_j), = exp(i θ_j |1⟩⟨ 1 | ⊗ P_j), = exp(i (θ_j/2) (1 - Z) ⊗ P_j), = exp(i (θ_j/2) 1⊗ P_j) exp(-i (θ_j/2) Z ⊗ P_j), which is a product of two multi-qubit Pauli rotations. These can be reduced to a single-qubit rotation each after conjugation through an appropriate Clifford <cit.>. Therefore the cost of each controlled Pauli rotation is 2 single-qubit rotations plus Cliffords, and the number of single-qubit rotations for U_1 is 2L per Trotter step. At first glance it appears that for a given t, the second-order formula requires 4L single-qubit rotations to implement. In fact in QPE circuits this is not the case, and the second-order formula can also be implemented with 2L rotations, as for the first-order formula, but with better error suppression. This trick was introduced in Ref. <cit.>, and is known as directionally-controlled phase estimation. It was expanded on in Refs. <cit.> and also used in Ref. <cit.>. We briefly give a derivation of the directionally-controlled approach. The general procedure is presented in Fig. <ref>. We consider the controlled time evolution operator in Fig. <ref>(a). The state of the qubits at the end of this circuit is | Ψ⟩ = 1/√(2)(|0⟩⊗ | ψ⟩ + |1⟩⊗ e^iHt | ψ⟩). Now, note that we can apply e^-iHt/2 to the data qubits in Fig. <ref>(a) without affecting any measurement outcomes; since this operator commutes with all controlled-e^iHt gates, it can be moved to the end of the circuit where it has no effect on the measurement of the ancilla. With this additional operator applied, the final state of the qubits is | Ψ⟩ = 1/√(2)( |0⟩⊗ e^-iHt/2 | ψ⟩ + |1⟩⊗ e^iHt/2 | ψ⟩), and we see that we can work with circuit in Fig. <ref>(b) instead. We next expand e^iHt/2 via its Trotter formula, e^iHt/2≈ V_K … V_2 V_1, where K is the number of terms in the Trotter product formula, equal to L for the first-order formula and 2L for second-order formula. Then, |Ψ⟩ = |0⟩⊗ (V_K … V_2 V_1)^† |ψ⟩ + |1⟩⊗ (V_K … V_2 V_1) |ψ⟩ = |0⟩⊗ V_1^† V_2^†… V_K^† |ψ⟩ + |1⟩⊗ V_K … V_2 V_1|ψ⟩. For even-order Trotter formulas the string of operators V_K … V_2 V_1 is symmetric, so that V_j=V_K-j+1, and the expansion is unchanged when the order of the terms is reversed. Therefore, for the second-order Trotter formula (but not the first-order formula) we can write |Ψ⟩ = |0⟩⊗ V_K^†… V_2^† V_1^† |ψ⟩ + |1⟩⊗ V_K … V_2 V_1|ψ⟩, which is equivalent to the circuit in Fig. <ref>(c). Lastly, note that the paired operators in Fig. 
<ref>(c) can each be expressed as e^i |1⟩⟨ 1| ⊗ H_i t/4 e^-i |0⟩⟨ 0| ⊗ H_i t/4 = e^-i Z ⊗ H_i t / 4, which can be reduced to a rotation on a single qubit plus Clifford gates. Therefore, application of the second-order Trotter formula in QPE can be performed with 2L rotations, which is equal to the number required for the first-order Trotter formula. In addition, the Trotter expansion is applied to the operator e^iHt/2 instead of e^iHt, resulting in lower Trotter error. §.§ The hydrogen molecule We next define the Hamiltonian that we will consider throughout this paper. As an application of QPE, we will consider the common task of finding the ground-state energy of an electronic structure Hamiltonian. Such a Hamiltonian can be defined in second-quantized form as H = h_0 + ∑_pq h_pq a_p^† a_q + 1/2∑_pqrs h_pqrs a_p^† a_q^† a_s a_r, where p, q, r and s label spin orbitals. The coefficient h_0 defines the nuclear-nuclear contribution, and h_pq and h_pqrs are one- and two-body integrals, respectively. The form of these integrals are well known from quantum chemistry <cit.>. In this paper we are concerned with compiling a minimal chemistry problem to lattice surgery operations, including visualization of the patch layout. We therefore consider the hydrogen molecule H_2 in a STO-3G basis, which is a prototypical minimal molecular example, consisting of 2 electrons in 2 spatial orbitals, or 4 spin orbitals. We use an equilibrium geometry with an internuclear distance of 0.7414 Å. The fermionic Hamiltonian in Equation <ref> must be mapped to a qubit Hamiltonian for use in QPE. Because the minimal basis for H_2 consists of 4 spin orbitals, direct mappings will result in a Hamiltonian with 4 qubits. However, as shown by Bravyi et al. <cit.>, the qubit Hamiltonian for this problem can be reduced to just a single-qubit operator. This can be seen from symmetry arguments; the H_2 Hamiltonian commutes with spin and particle-number operators, and also has spatial symmetry. Each of these symmetries allows one qubit to be tapered. More precisely, labelling the bonding and antibonding orbitals as ψ_g and ψ_u, and ordering the spin orbitals as ψ_g↑, ψ_g↓, ψ_u↑, ψ_u↓, the only determinants that contribute to the ground-state wave function are | 1100 ⟩ and | 0011 ⟩, and these two states can be represented by a single qubit. A more general approach for tapering qubits due to ℤ_2 symmetries is given in Ref. <cit.>. The Hamiltonian used takes the form H = c_1 Z + c_2 X, with c_1 = 0.78796736 and c_2 = 0.18128881, and we have neglected the identity contribution. Note that an identical qubit Hamiltonian was considered in Ref. <cit.>, which performed textbook QPE on a neutral-atom quantum computer. §.§ Overall logical circuit Using the second-order Trotter formula techniques described in Section <ref>, we derive logical circuits for both textbook and iterative quantum phase estimation. We choose a time step t = π/(c_1+c_2), where c_1 and c_2 are defined in Equation <ref>, in order to ensure that eigenvalues of H t are in the range [ -π, π ]. In this simple application, we take just a single time step in the Trotter expansion of e^iHt. We also perform QPE for just three bits of accuracy in the energy. These simplifications will of course lead to large errors in the final energy estimate; indeed, after removing rescaling factors, using three bits of precision means that the energy can only be estimated to precision (c_1 + c_2)/4 = 0.242 Ha. 
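Before turning to the compilation itself, the directionally-controlled second-order Trotter step described above can be checked numerically. The short sketch below (an illustration, not part of the compilation) builds the product of the two e^-ic_1 Z ⊗ Z t/4 factors and the central e^-ic_2 Z ⊗ X t/2 factor, and compares it with the exact operator |0⟩⟨ 0| ⊗ e^-iHt/2 + |1⟩⟨ 1| ⊗ e^iHt/2; the printed norm is the single-step Trotter error, which is not small for a single step of this size, consistent with the simplifications noted above.

```python
import numpy as np
from scipy.linalg import expm

# Numerical sanity check (sketch) of the directionally-controlled second-order
# Trotter step for the tapered H_2 Hamiltonian and the time step chosen above.
c1, c2 = 0.78796736, 0.18128881
Z = np.diag([1.0 + 0j, -1.0])
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = c1 * Z + c2 * X
t = np.pi / (c1 + c2)                        # time step used in the text

def R(P, theta):
    """Pauli rotation R_P(theta) = exp(-i * P * theta / 2)."""
    return expm(-0.5j * theta * P)

ZZ, ZX = np.kron(Z, Z), np.kron(Z, X)
theta1, theta2 = t * c1 / 2, t * c2

# controlled second-order step: exp(-i ZZ c1 t/4) exp(-i ZX c2 t/2) exp(-i ZZ c1 t/4)
trotter = R(ZZ, theta1) @ R(ZX, theta2) @ R(ZZ, theta1)

# exact "directional" target operator
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
target = np.kron(P0, expm(-0.5j * H * t)) + np.kron(P1, expm(+0.5j * H * t))

print("operator-norm Trotter error:", np.linalg.norm(trotter - target, 2))
```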
Here, we are primarily interested in understanding the required circuits in terms of lattice surgery primitives. Increasing the number of Trotter steps, or bits of precision, does not provide further insight beyond increasing the circuit depth (and number of ancilla qubits, in the case of textbook QPE). To implement e^-ic_1 Z ⊗ Z t/4 and e^-ic_2 Z ⊗ X t/2 we use rotation operations R_Z⊗ Z and R_Z ⊗ X, defining R_P(θ) = e^-iPθ/2. Thus we have rotation angles θ_1 = tc_1/2 and θ_2 = tc_2 for the Z ⊗ Z and Z ⊗ X rotations, respectively. We also make a minor optimisation by combining pairs of R_Z ⊗ Z(θ_1) rotations into a single R_Z ⊗ Z(2θ_1) rotation where possible. Figures for the logical circuits are provided in Appendix <ref>. For both iterative and textbook QPE circuits, the gates can be grouped into three types: Pauli gates such as the X gate, non-Pauli Clifford gates such as the Hadamard and S^† gates, and non-Clifford gates such as the two-qubit rotations and T^† gates. These different types of gates require different techniques to be implemented on the surface code, which we shall detail further in Section <ref>. § IMPLEMENTING LOGICAL GATES In this section we discuss how to implement the logical circuits presented in Section <ref> using operations available on the surface code. First, we approximately decompose the logical gates into a sequence of Clifford and T gates. Then we consider two potential methods for implementing these gates: in Section <ref>, we implement the Clifford and T gates directly using native lattice surgery operations; whereas in Section <ref>, we use commutation relations to remove Clifford operations from the circuit, at the cost of needing to implement more general T-like operations. §.§ Decomposition to Clifford and T gates Ideally we would want to implement logical quantum operations transversely on our error-correcting code, applying the operation to each physical qubit (or tuple of physical qubits) in turn. Unfortunately, the Eastin-Knill theorem shows that no quantum error-correcting code can implement a universal set of logical gates transversely <cit.>. In the case of the surface code, the logical gates which can be implemented transversely are single-qubit Pauli gates if the code distance is odd. Other gates within the Clifford group can be implemented on the surface code via lattice surgery operations such as patch deformation <cit.>, but non-Clifford gates such as the T gate cannot be implemented in an error-corrected fashion. However, it is possible to approximately decompose an arbitrary unitary operation into a sequence consisting of Clifford gates and the single-qubit T gate. This was originally shown for arbitrary gates using the Solovay-Kitaev theorem <cit.>, and a number of improvements have been subsequently shown for both single- and multi-qubit gates <cit.>. In the case of the QPE circuits in Section <ref>, the circuits contain a mixture of Clifford and non-Clifford gates. As Clifford gates can be implemented on the surface code via lattice surgery and patch-deformation techniques, such as the ones that we shall describe in Section <ref>, we only need to decompose the non-Clifford gates. Both the textbook and iterative QPE circuits consist of a series of two-qubit Pauli rotations as part of the Trotter expansion. In textbook QPE, there are controlled phase gates after the two-qubit rotations, to implement the inverse quantum Fourier transform.
In iterative QPE, the two-qubit rotations are followed by (classically conditioned) single-qubit phase gates to implement a semi-classical version of the inverse Fourier transform. We decompose the non-Clifford gates in two steps. First, we exactly compile the two-qubit operations into Clifford gates and single-qubit Z rotations and phase gates using circuit identities presented in Figure <ref>. Second, we use the software package to approximately decompose the single-qubit Z rotations into sequences of one-qubit Clifford and T gates <cit.>. Note that the two-qubit Z ⊗ Z rotations require a different decomposition to the controlled phase gates, due to differences in local phases. In comparison, single-qubit Z rotations are equivalent to single-qubit phase gates up to a global phase R_Z(θ) = e^-iθ/2P(θ), and can therefore be decomposed using the same techniques. An example of using to approximately decompose a single-qubit Z rotation into the Clifford and T gate set is provided in Figure <ref>. There is a trade-off to be made between the accuracy of decompositions generated by and the number of gates required. approximates a single rotation R_Z(θ) up to error ϵ in the operator norm with typically 3log_2(1/ϵ) + O(log(log(1/ϵ))) non-Clifford gates <cit.>. To get an understanding of how this extends to a whole circuit, we ran simulations of the textbook and iterative QPE circuits with decompositions of varying accuracy, from 1 bit to 32 bits. For each number of bits of accuracy, we generate 1000 circuits with the single-qubit rotations decomposed to that degree of accuracy, and simulate each circuit 10,000 times. The results are presented in Figure <ref>. In Figure <ref>, we take the total variation distance between the output distributions of the decomposed circuits with that of the perfect QPE circuit. From this we see that for both textbook and iterative QPE, the total variation distance reduces quickly to approximately 1.7% at 10 bits of precision per gate decomposition, but tails off beyond this value. This is due to finite precision used when estimating the total variation distance from samples. In the following results, we choose 10 bits of precision for the decomposition of phase gates, as it provides sufficient overall total variation distance for purpose of this circuit. We also present the number of gates required for each gate decomposition accuracy in Figure <ref>. For 10 bits of precision, there are approximately 1,300 and 1,000 logical gates for textbook and iterative QPE, respectively. Fewer logical gates can also be used at the cost of increased error; for example, fewer than 1,000 logical gates can be achieved with 5 bits of precision per rotation: 870 gates for textbook QPE, and 740 gates for iterative QPE. The total variation distance at 5 bits of precision is 23%. The results in Figure <ref> also show that for this particular circuit iterative QPE requires fewer gates than textbook QPE regardless of decomposition accuracy. This is due to the fact that the inverse QFT step of textbook QPE requires two-qubit controlled phase rotations around fixed angles θ. These are subsequently decomposed into smaller rotations θ/2 and -θ/2, which are then approximately decomposed using . In comparison, iterative QPE works with single qubit phase rotations θ which are classically controlled. As these single angles are larger than those used for the single-qubit rotations in textbook QPE, fewer gates are required to decompose them up to a desired accuracy. 
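For completeness, the total variation distances in Figure <ref> are estimated from finite samples, which is what produces the floor of roughly 1.7% at high decomposition accuracy. A minimal sketch of the estimator is given below; the uniform 3-bit toy distribution is a stand-in (an assumption) for the actual QPE output distributions.

```python
import random
from collections import Counter

def tv_distance(samples_a, samples_b):
    """0.5 * sum_x |p_a(x) - p_b(x)| estimated from two lists of outcomes."""
    pa, pb = Counter(samples_a), Counter(samples_b)
    outcomes = set(pa) | set(pb)
    return 0.5 * sum(abs(pa[x] / len(samples_a) - pb[x] / len(samples_b))
                     for x in outcomes)

# even two sample sets drawn from the *same* 3-bit distribution give a nonzero
# estimate with 10,000 shots -- the sampling floor seen in the figure
random.seed(0)
shots_a = [format(random.getrandbits(3), "03b") for _ in range(10_000)]
shots_b = [format(random.getrandbits(3), "03b") for _ in range(10_000)]
print(tv_distance(shots_a, shots_b))   # of order 1e-2
```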
For this particular QPE circuit, which is only performed to three bits of accuracy, the smallest rotation angle required beyond the Hamiltonian simulation step in the iterative QPE circuit is -π/4, which can be implemented as a single T^† gate. Hence for the rest of this paper we shall primarily focus on iterative QPE. Finally, for anyone curious to see an example of the complete logical circuit, we have included example QASM circuits in the supplementary material to this paper <cit.>, including an iterative QPE circuit with phase gates decomposed up to 10 bits of precision. This circuit features a total of 1029 operations, of which there are 13 X gates, 169 Z gates, 34 CNOT gates, 411 Hadamard gates, 13 S/S^† gates, 386 T/T^† gates, and three measurements in the Z basis. This is the circuit we will estimate the resources for in Section <ref>. Note that is a randomised process, and so different runs might produce different gate decompositions than presented here. §.§ Directly implementing Clifford gates Next, we consider methods to treat Clifford gates in the logical circuit. These can either be applied directly, or can be moved to the end of the logical circuit <cit.>. In this section we first discuss the time and space cost of directly implementing both Clifford and T gates. Both of these estimates will be calculated in terms of the code distance d. The approach of moving Clifford operations will then be considered in Section <ref>. The simplest gates to implement on the surface code are single-qubit Pauli gates. These operations can be implemented by either applying the corresponding Pauli gate to all data qubits if the distance d is odd, or, if the distance d is even, by tracking their values in software. Due to their simplicity, we shall not focus on how to implement them in this section. It is important to note that not every Clifford gate presented in Section <ref> will be directly implemented. Any sequence consisting of only Z, S/S^†, and T/T^† gates can be implemented at the cost of implementing a single T gate (as shown in Appendix <ref>). Thus we can think of the sequences of gates generated by such as those shown in Figure <ref> as equivalent to sequences of alternating Hadamards and T-like gates. Before discussing how to implement non-Pauli gates, we present how our logical qubits are arranged on a quantum processor with nearest-neighbour connectivity. For iterative QPE, we have two logical qubits, each of which is represented by a d × d patch. The primary lattice surgery operations we utilise are for implementing joint Z ⊗ Z measurements. We arrange our logical qubits as d × d patches such that performing joint measurements with the horizontal observable is easy. We also introduce two additional spaces of d × d data qubits, which can be used as both routing space for performing joint measurements with the vertical operator, and for additional qubits required for implementing logical gates. We have the layout in Figure <ref>, which for distance d uses a total of (2d+2)^2 data qubits, or 2(2d+2)^2 physical qubits including those used for measurement. §.§.§ CNOT gate A CNOT gate between a control qubit c and target qubit t can be implemented based on two-qubit joint Pauli measurements <cit.>, see Figure <ref>. Namely, an auxiliary qubit a is initialised in the |+⟩ state, followed by two joint measurements: Z_c ⊗ Z_a and X_t ⊗ X_a. Finally, the auxiliary qubit is measured out in the Z basis, and Pauli corrections are applied based on the outcomes. 
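The logical content of this circuit is easy to verify on a statevector. The sketch below illustrates the measurement-based CNOT (not its lattice surgery implementation) using one consistent choice of Pauli corrections, an X on the target when the Z_c ⊗ Z_a and final Z_a outcomes disagree and a Z on the control when the X_t ⊗ X_a outcome is -1; the text leaves the precise correction rule to the cited references, so this specific choice should be read as an assumption.

```python
import numpy as np

# Statevector sketch of the measurement-based CNOT: ancilla in |+>, joint
# Z_c(x)Z_a and X_t(x)X_a measurements, ancilla Z readout, Pauli corrections.
rng = np.random.default_rng(3)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

def kron3(a, b, c):                            # qubit order: (control, ancilla, target)
    return np.kron(np.kron(a, b), c)

def measure(state, obs):
    """Projective measurement of a +/-1 Pauli observable; returns (bit, post-state)."""
    proj = (np.eye(8) + obs) / 2
    p0 = np.vdot(state, proj @ state).real
    if rng.random() < p0:
        post = proj @ state
        return 0, post / np.linalg.norm(post)
    post = (np.eye(8) - obs) / 2 @ state
    return 1, post / np.linalg.norm(post)

# random two-qubit (control, target) input; ancilla prepared in |+>
psi_ct = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
psi_ct /= np.linalg.norm(psi_ct)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
state = np.einsum("ct,a->cat", psi_ct, plus).reshape(8)

m1, state = measure(state, kron3(Z, Z, I2))    # Z_c (x) Z_a
m2, state = measure(state, kron3(I2, X, X))    # X_t (x) X_a
m3, state = measure(state, kron3(I2, Z, I2))   # read out the ancilla in Z

if (m1 + m3) % 2 == 1:
    state = kron3(I2, I2, X) @ state           # X correction on the target
if m2 == 1:
    state = kron3(Z, I2, I2) @ state           # Z correction on the control

psi_out = state.reshape(2, 2, 2)[:, m3, :]     # ancilla has collapsed to |m3>
cnot = np.eye(4)[[0, 1, 3, 2]]                 # CNOT on (control, target)
expected = (cnot @ psi_ct.reshape(4)).reshape(2, 2)
print("overlap with CNOT|psi>:", abs(np.vdot(expected, psi_out)))   # 1 up to global phase
```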
This operation can be implemented on our patches via the protocol shown in Figure <ref>. In Figure <ref>, we use the routing space to initialise an additional patch in the |+⟩ state. We then use a merge-and-split operation between the horizontal boundaries of the control patch and the auxiliary patch to perform the Z ⊗ Z measurement, and at the same time grow and shrink the target patch to move it into the routing space. Next, we use another merge-and-split operation between the vertical boundaries of the target patch and the auxiliary patch to implement the X ⊗ X measurement. Finally, we measure out the auxiliary patch and at the same time use patch growing and shrinking to move the target qubit back to its original space. The remaining Pauli operations can either be applied transversely at the start of the next operation if the distance d is odd, or simply tracked in software if the patch distance d is even. The operations for growing and joining patches require d QEC rounds in order to protect the code from both qubit and measurement errors, the operations for splitting and shrinking patches as well as the single-qubit logical X measurement each require a single QEC round, and the Pauli operations at the end of the circuit are effectively free, meaning a total of 3d + 4 QEC rounds are required to implement the CNOT gate. §.§.§ Hadamard gate The Hadamard gate is a Clifford gate whose role is to swap the X and Z observables of a qubit. Naïvely, this can be achieved on a surface code patch by applying a Hadamard operation transversely to all data qubits on the patch, as shown in Figure <ref>. However, this has the side-effect of swapping the X and Z stabilisers as well as the logical observables, resulting in a different patch to the one we started with and making joint patch operations such as those used for the CNOT in Section <ref> more complicated. This effect of swapping the stabilisers can be seen by comparing the patches in Figures <ref> and <ref>. If we rotated the patch by 90 degrees around the central data qubit after applying the transversal Hadamard gates, then we would have implemented the logical Hadamard gate. However, this is not possible on a physical device. Instead, we use a patch deformation technique, which we present in Figures <ref>-<ref>, to achieve the same effect <cit.>. First in Figure <ref>, we grow the patch into a longer one with length 2d + 1. At the same time we move the corner at the top right in the original patch to the top left in the longer patch. Next in Figure <ref>, we use patch deformation to move the corner on the bottom-right up to the top-right. At this stage the logical observables have changed directions from vertical to horizontal and vice versa. Next, we shrink the patch down in Figure <ref>. Now we have the X and Z logical observables swapped, with the stabilisers in their original positions, but the whole patch has been shifted upwards. To move this patch back to its original position, we start by growing and shrinking the patch in Figures <ref> and <ref>, but this leaves the patch one row of stabilisers higher than it originally was. To correct this, we use two rounds of SWAP gates to swap the data qubits with neighbouring measurement qubits, as shown in Figure <ref>. The most expensive parts of this process are the stages that involve patch growing and corner movement, which require d QEC rounds each. 
Since in general two-qubit gates are much noisier than one-qubit gates, the transversal Hadamard at the start of this sequence does not require any QEC rounds. Finally, patch shrinking and transversal SWAP gates each require a single QEC round, thus requiring a total of 3d + 4 QEC rounds. §.§.§ S/S^† gate The S gate, also known as the √(Z) gate, is a Clifford gate that applies a phase of i to the |1⟩ state. Like the Hadamard gate, this gate also cannot be implemented transversely on the surface code. There are various ways of implementing the S gate using patch deformation, similarly to implementing the Hadamard in Section <ref> <cit.>. However, these require extending the X observable of a patch, and therefore require moving the patch into the routing space and back. Instead, we consider a different technique, which uses an additional patch in the |Y_+⟩ = (|0⟩ + i|1⟩)/√(2) state <cit.>. We then perform a joint Z ⊗ Z measurement between this qubit and our qubit, and measure this auxiliary qubit in the X basis. Finally, we apply a Z correction depending on the outcomes of the two measurements. A circuit describing this operation is presented in Figure <ref>. Note that unlike Z and X basis states, the |Y_+⟩ state cannot be generated in a single QEC round. Instead, we utilise a different technique to generate Y basis states in d/2 + 2 rounds with no additional qubits <cit.>. With this additional patch, we can implement the logical S gate using the process described in Figure <ref>. Generating the |Y_+⟩ state in Figure <ref> takes d/2 + 2 QEC rounds, the joint measurement in Figure <ref> takes d QEC rounds, and measuring the |Y_+⟩ state in the X basis takes a single QEC round. Any Z correction can be applied in software at no additional cost, so the total number of QEC rounds required is 3d/2 + 3. Finally, note that the S^† = SZ gate can also be implemented at no additional cost, by simply inverting the conditions under which the Z correction is applied. §.§.§ T/T^† gate The T gate is a non-trivial gate to implement on the surface code as it cannot be performed either transversely or via lattice surgery operations such as patch deformation. Instead, we introduce an auxiliary qubit initialised in the |T⟩=(|0⟩ + e^iπ/4|1⟩)/√(2) state. With this |T⟩ state prepared, we can implement the T gate using techniques similar to those for implementing the S gate in Section <ref>. The circuit is presented in Figure <ref> <cit.>. First, we perform a joint Z ⊗ Z measurement between the data qubit and the auxiliary qubit. Next, we perform an S gate conditioned on the outcome of this measurement. Finally, we measure out the auxiliary qubit in the X basis, and depending on this measurement outcome apply a final Z gate to the data qubit. However, while the |Y_+⟩ state can be prepared on the surface code in a fault-tolerant way in d/2 + 2 QEC rounds, the |T⟩ state cannot be prepared on the surface code in an error-corrected fashion, and thus additional work is required in order to prepare a high-quality |T⟩ state. We shall detail this further in Section <ref>. We can implement this circuit on our patch layout using the process shown in Figure <ref>. Note that the patch for the |T⟩ state is not stored in the routing space like the |Y_+⟩ is in Figure <ref>. This is because unlike the |Y_+⟩ state, the |T⟩ state cannot be generated in a fault tolerant process, and instead needs to be generated elsewhere and stored outside of the routing space until it is required.
Also note that the patch for the |T⟩ state in Figure <ref> is rotated compared to the patches for our data qubits, such that the vertical observable on the auxiliary patch matches the horizontal observable on our data patches. We use this to perform a joint Z ⊗ Z measurement between our auxiliary patch and our data patch via merge-and-split operations in Figure <ref>. Finally, in Figure <ref> we measure our auxiliary patch in the X basis, and at the same time we potentially apply an S correction using the methods described in Section <ref>. As with the CNOT gate presented in Section <ref>, the Z operation is effectively free as it can be either tracked in software or implemented transversely. The joint measurement requires d QEC rounds, the X measurement requires a single QEC round, and the S correction requires 3d/2 + 3 QEC rounds, leading to a total of 5d/2 + 4 QEC rounds to implement a logical T gate. Finally, it is worth noting that other sequences of gates can also be implemented using these techniques with no extra cost. In general, any sequence consisting of only T/T^†, S/S^†, and Z gates can be implemented using the protocol above at the cost of implementing a single T gate. Further details are provided in Appendix <ref>. §.§ Moving Clifford gates In this section we will consider another way of implementing the logical circuit from Section <ref> on the surface code based on <cit.>. This technique offers the benefit of only needing to think about how to implement the non-Clifford gates, but at the cost of increasing the complexity of implementing such gates. §.§.§ Pauli product rotations The key to this implementation method is that the logical gates we want to implement can be realised as rotations in a particular single- or multi-qubit Pauli basis. More formally, an n-qubit quantum gate can be implemented as a sequence of rotations R_P_j(θ_j) = e^-iP_jθ_j/2 for suitably chosen P_j ∈{I, X, Y, Z}^⊗ n and θ_j [An astute reader may notice that our definition of R_P_j(θ_j), and by extension our translation of Clifford and non-Clifford gates to Pauli rotations, differs from that in Ref. <cit.> by a factor of 2. This is to ensure correct periodicity, such that R_P(θ + 2π) = R_P(θ) ∀ P, θ. This is also consistent with the definition of Pauli rotations in other texts such as, for example, <cit.>]. The simplest example of this phenomenon is the Pauli-gates themselves, which can be implemented as P = R_P(π). Similarly, the T and S gates are both single-qubit rotations in the Z basis, and can thus be realised as T = R_Z(π/4) and S = R_Z(π/2), respectively. Single-qubit Pauli measurements, although not rotations around a Pauli basis, can also be seen as operations which project a state into a Pauli basis. In the case of QPE for example, measurements project a state into the Z basis. The remaining gates to translate into this picture are the CNOT and Hadamard gates. Although not as easy to see as the gates listed above, both of these gates can be implemented as sequences of Pauli π/2 rotations given in Figure <ref> <cit.>. The Hadamard gate, which can be written as H = (X + Z)/√(2), can be decomposed as H = R_Z(π/2) · R_X(π/2) · R_Z(π/2), up to a global phase. The CNOT can be written as a joint π/2 Z ⊗ X rotation, followed by a -π/2 Z rotation on the control qubit, and a -π/2 X rotation on the target qubit. This is similar to the circuit used in Figure <ref>, but with Pauli π/2 rotations rather than Pauli measurements. 
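These identities are easy to verify numerically. The following sketch (illustrative only) checks, using the convention R_P(θ) = e^-iPθ/2 from above, that T = R_Z(π/4), that the Hadamard equals R_Z(π/2) R_X(π/2) R_Z(π/2), and that the CNOT equals the product of the three π/2 rotations in Figure <ref>, in each case up to an unobservable global phase.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the Pauli-rotation identities quoted above, all of which
# hold up to a global phase.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.eye(4, dtype=complex)[[0, 1, 3, 2]]
T = np.diag([1.0, np.exp(1j * np.pi / 4)])

def R(P, theta):
    return expm(-0.5j * theta * P)

def equal_up_to_phase(A, B):
    k = np.argmax(np.abs(B))
    phase = A.flat[k] / B.flat[k]
    return np.allclose(A, phase * B)

print(equal_up_to_phase(T, R(Z, np.pi / 4)))                               # T = R_Z(pi/4)
print(equal_up_to_phase(H, R(Z, np.pi / 2) @ R(X, np.pi / 2) @ R(Z, np.pi / 2)))
cnot_rot = (R(np.kron(Z, X), np.pi / 2)
            @ R(np.kron(Z, I2), -np.pi / 2)
            @ R(np.kron(I2, X), -np.pi / 2))
print(equal_up_to_phase(CNOT, cnot_rot))                                   # CNOT from pi/2 rotations
```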
§.§.§ Moving Pauli rotations The benefit of describing operations as rotations in a Pauli basis is that it becomes easier to understand how to transform them without modifying the outcome of the circuit. For example, in Figure <ref>, a π/2 rotation in the X basis is moved past a π/4 rotation in the Z basis. The result is that the Z rotation is transformed into a π/4 rotation in the iXZ = i(-iY) = Y basis. These transformations can be applied more generally as well, the rules for which we discuss in Appendix <ref>. The benefit of these transformations to the circuit is that we can move all π and π/2 Pauli rotations, which correspond to Pauli and Clifford operations, past the final measurement operation of the circuit. Operations beyond this point do not affect the outcome of our circuit, and therefore do not need to be implemented. Thus we have reduced our circuit to only involving π/4 Pauli rotations, which correspond to a generalisation of T gates, and joint Pauli measurements. We shall now look at how to implement these more general operations. §.§.§ Implementing π/4 joint Pauli rotations First we shall show how to reduce the π/4 joint Pauli rotations to joint Pauli measurements. These will then be implemented using a particular patch layout and lattice surgery operations in Section <ref>. A circuit for implementing π/4 rotations is presented in Figure <ref>. This can be seen as a generalisation of the T gate circuit in Figure <ref>, where now the single-qubit Z basis has been replaced with a general multi-qubit basis P. The auxiliary qubit required for this operation is the same |T⟩ = (|0⟩ + e^iπ/4|1⟩)/√(2) state from Section <ref>. Because the rotation basis has generalised, so too have the corrective gates performed after the measurement. Now, instead of single-qubit S and Z gates we have more general π/2 and π rotations in an arbitrary Pauli basis P. The implementation of the π rotation is still a Pauli operation, and can be either tracked in software or implemented transversely as before. As for the π/2 rotation, one can account for this by employing the same techniques as described in Section <ref> in an online fashion, moving the rotation past the final round of measurements to effectively remove it from the circuit and adjusting the subsequent operations accordingly <cit.>. §.§.§ Implementing joint Pauli measurements Finally we discuss how to implement general Pauli measurements between patches on a surface code. The specific arrangement we use is given in Figure <ref>. Note that this patch has more routing space than the one in <ref>, this is because the more general operations require access to both the horizontal and vertical observables of the patches. This results in six logical patches arranged on a grid of (3d + 4) × (2d + 2) data qubits, or 2(3d + 4) × (2d + 2) physical qubits total. The most challenging operations to implement are those which include the Y basis of a qubit. This is because the Y basis does not correspond to the horizontal or vertical observable on a surface code patch, but is instead a product of both the horizontal and vertical observables. One option is to decompose π/4 rotations which involve the Y basis of a qubit into a sequence of π/4 and π/2 rotations which only act on the X and Z bases <cit.>. However, doing so introduces π/2 rotations which cannot be moved past the π/4 rotation without reintroducing the Y basis, so such rotations would need to be implemented. 
Instead, we utilise another technique from <cit.> to implement Y basis measurements directly via lattice surgery operations. Some example measurements for implementing Pauli π/4 rotations in the Y ⊗ X and Z ⊗ Y bases are given in Figure <ref>. These joint measurements require d QEC rounds, followed by a single QEC round to measure the auxiliary patch in the X basis. These two sets of measurement results give us the corrections to move past future operations. Here we utilise some lattice surgery techniques not used in Section <ref>. First, we add weight-five stabilisers, known as twist defects, which involve a Y Pauli term on one of the qubits. To ensure the surrounding stabilisers commute with the twist defects, we utilise two other lattice surgery techniques: first, we add domain walls, which are denoted by half-blue-half-grey squares and act as a combination of X and Z stabilisers; and second, we add elongated weight-four stabilisers, which are denoted by blue and grey rectangles. It is important to note that although these techniques allow for direct implementation of joint measurements involving the Y basis, there is an additional cost in that measuring these longer stabilisers requires additional connectivity compared to the layout used in Section <ref>. These extra connections between measurement qubits are not uniform, and shown by arrows in Figure <ref>. In general, for distance d a total of 4d extra connections are required for implementing this algorithm, which connect four columns of adjacent measurement qubits. § ERROR CORRECTION OVERHEADS We are now ready to discuss the cost of implementing these logical gates on the surface code. There are two primary sources of error to account for, which can be treated independently: first, errors from generating |T⟩ states, which we shall explore in Section <ref>; and second, errors from a logical failure on a qubit, which we shall explore in Section <ref>. §.§ Generating |T⟩ states Both of the methods used in Section <ref> require additional qubits initialised in the |T⟩ state. It is possible to initialise a surface code patch into an arbitrary qubit state |ψ⟩, by initialising one data qubit of the patch in the |ψ⟩ state, followed by d rounds of measurements <cit.>. However, initialising a data qubit into an arbitrary state means that this qubit is initially unprotected from errors, so this method cannot be implemented in a way that reduces the logical error probability below the physical error probability. In fact, it can be shown that there is no fault-tolerant way of initialising non-stabiliser states such as the |T⟩ state on the surface code [Note that there are other ways of initialising a |T⟩ state on the surface code which are more immune to errors <cit.>, however these rely on post-selection and therefore might introduce additional overheads. For simplicity we shall not focus on this method.]. Even though patches cannot be initialised in the |T⟩ state in a way that suppresses errors, it is possible to use distillation protocols to reduce the error probability of |T⟩ states. These protocols take multiple noisy |T⟩ states and output a smaller number of |T⟩ states with a reduced error probability <cit.>. For example, if it is possible to generate 15 |T⟩ states each with error probability p, it is possible to distill these into a single |T⟩ state with error probability 35p^3 <cit.>. It is also possible to concatenate these factories to reduce the error probability even further. 
For example, if the 15-to-1 protocol is used to generate 15 |T⟩ states each with error probability 35p^3, these can then be used in another 15-to-1 protocol to generate a single |T⟩ state with error probability 35(35p^3)^3 = 1,500,625p^9 <cit.>. The cost with these protocols is that reducing the error probability requires additional resources in terms of both time and number of qubits. A summary of several protocols and their associated costs is provided in Ref. <cit.>. When choosing a suitable protocol, there are multiple factors that we need to consider. First, we need to consider the overall logical failure probability from faulty |T⟩ state generation. This means that if our logical circuit uses m T gates – and therefore requires m |T⟩ states – we need to choose a probability of distilled state failure p_dist such that m × p_dist is within our error bounds. The second aspect we need to consider is the time required to generate each |T⟩ state. In order to avoid logical qubits remaining idle as we wait for |T⟩ states to be generated, we need to ensure that |T⟩ states are generated fast enough that they are available as and when they are needed. This depends on both the number of QEC rounds required to generate the |T⟩ states, but also the number of QEC rounds required to implement these logical operations. If we implement Clifford gates directly as described in Section <ref>, the circuit primarily consists of alternating sequences of Hadamard gates, which take 3d + 4 QEC rounds, and T-like gates, which take between d + 1 and 5d/2 + 4 QEC rounds, depending on whether or not an S gate correction is required. This means that when implementing Clifford gates directly, a |T⟩ state needs to be produced at least once every 4d + 5 QEC rounds. In comparison, when Clifford operations have been moved through the circuit as described in Section <ref>, the only operations required are a single joint Pauli measurement and a single X basis measurement, meaning that a |T⟩ state must be produced every d+1 QEC rounds. If a single distillation protocol cannot generate states fast enough, multiple instances of the protocol can be run in parallel to generate states more frequently, at the cost of increasing the number of physical qubits <cit.>. As we show in Appendix <ref>, up to four factories can be placed around the two corners at the top of the routing space. It is possible to add even more factories beyond these four, but doing so could require additional space for routing and storage of |T⟩ states. On the other hand, if a logical |T⟩ state can be generated faster than required, additional storage space is required to protect the state from errors while it waits to be consumed, which can be included as part of the routing space estimates. §.§ Estimating code distance To reduce the probability of a logical error occurring on one of our logical qubits, we can tweak the code distance d. A higher distance will reduce the probability of getting a sequence of physical errors which lead to a logical error, but comes at the cost of increasing both the number of physical qubits per logical qubit, and the number of QEC rounds per logical operation. In the case of the surface code, the probability of a logical error on a single logical qubit per code cycle assuming a depolarising noise model can be estimated as p_L(p, d) = 0.1(100p)^(d+1)/2, where p is the physical error probability <cit.>. 
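Given this scaling, choosing a code distance amounts to a short search: take the smallest d for which the estimated probability of any logical error over the whole computation stays within the target budget. The sketch below anticipates the numbers derived in the Results subsection (a 1% target failure probability and, for the direct-Clifford compilation, four patches and 2,318d + 3,363 QEC rounds); it is an illustration of the selection logic rather than the full resource estimate.

```python
# Illustrative distance-selection helper for the surface code.
def p_logical(p, d):
    """Logical error probability per patch per QEC round."""
    return 0.1 * (100 * p) ** ((d + 1) / 2)

def smallest_distance(p, n_patches, rounds_of_d, budget=0.01, d_max=50):
    for d in range(3, d_max + 1):
        failure = n_patches * rounds_of_d(d) * p_logical(p, d)
        if failure <= budget:
            return d, failure
    raise ValueError("no distance up to d_max meets the budget")

# direct-Clifford compilation: 4 patches (data + routing), 2318d + 3363 rounds
for p in (1e-3, 1e-4):
    d, fail = smallest_distance(p, n_patches=4,
                                rounds_of_d=lambda d: 2318 * d + 3363)
    print(f"p = {p:.0e}: choose d = {d}, estimated failure {fail:.1e}")
# expected output: d = 12 (failure ~3.9e-3) and d = 5 (failure ~6.0e-3)
```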
For the purpose of this application, we want to choose a sufficiently high d that the probability of a logical error occurring on any qubit during any QEC round is within our error bound. We use Equation <ref> to approximate our probability of a logical error at any point in the computation as (n_data + n_route) × n_meas× p_L(p, d), where n_data is the number of surface code patches for our data qubits, n_route is the number of additional patches used for routing [Note that at various points of the computation these routing patches are unused. This means that errors occurring on them will not lead to an overall failure. However, in the worst case a logical error occurs on one of these patches while it is in use, which can in turn lead to a failure of the overall computation, hence why we include both data patches and routing patches in this calculation.], and n_meas is the number of QEC rounds. Given these parameters and physical error probability p, we can pick a distance by choosing an appropriate d such that Equation <ref> is within our target failure probability. §.§ Results We are now ready to estimate error correction overheads for our iterative quantum phase estimation circuit. As a recap, our circuit consists of 13 X gates, 169 Z gates, 34 CNOT gates, 411 Hadamard gates, 13 S/S^† gates, 386 T/T^† gates, and three Z basis measurements. As previously described, X and Z gates are free as they can be implemented transversely at the start of a QEC round. Of the S and S^† gates, one is used in a sequence of T gates, and can therefore be implemented as a T-like gate. This leaves our costing as 411 Hadamard gates, 34 CNOT gates, 386 T-like gates, 12 S/S^† gates, and three measurements. We also need to make assumptions on the error correction requirements of our algorithm. For consistency with other work we shall assume physical errors correspond to depolarising noise with a physical error probability of either 10^-3 or 10^-4 <cit.>. We also assume a target failure probability of 1%, though a higher target probability can be used to reduce overheads <cit.>. §.§.§ Cost of directly implementing Clifford gates Using the estimates described in Section <ref>, we note that there are four logical patches to consider when estimating code distance. In terms of time requirements, CNOT and Hadamard gates require 3d + 4 rounds, S/S^† gates require 3d/2 + 3 rounds, T-like gates require up to 5d/2 + 4 rounds, and Z basis measurements require a single round. This brings our total number of rounds to 2318d + 3363. Using Equation <ref>, we find that for a physical error probability of 10^-3, distance d=12 achieves a logical error probability of 3.9 × 10^-3, requiring 1,352 physical qubits for the patches and 31,179 QEC rounds. For this distance, a |T⟩ state needs to be produced once every 53 rounds, with an error probability of 1.6 × 10^-5. This is a higher error probability than what is often seen from a lot of distillation techniques <cit.>, so instead we look for smaller factories which still fit within our target failure probability. Using the code from <cit.>, we find that for a physical error probability of 10^-3, a 15-to-1 factory with X distance 11, Z distance 5, and measurement distance 5 produces a |T⟩ state with error probability 8.1 × 10^-6 on average once every 31.3 QEC rounds, meaning a single factory is sufficient. This factory uses 2,066 physical qubits, along with 288 physical qubits for storing |T⟩ states. 
Combined with our 1,352 qubits for the logical circuit and routing, this leads to a total of 3,706 physical qubits. The additional logical qubit for storing |T⟩ states increases the probability of a logical error to 4.9 × 10^-3, leading to a total error probability of 8.1 × 10^-3. For a physical error probability of 10^-4, an error probability of 6.0 × 10^-3 can be achieved with distance d=5, requiring 14,953 QEC rounds and 288 physical qubits. A |T⟩ state needs to be generated every 25 rounds with an error probability of 1.0 × 10^-5. For this physical error probability, we find that a 15-to-1 factory with X distance 5, Z distance 3 and measurement distance 3 produces a |T⟩ state on average every 18.05 rounds with error probability 4.7 × 10^-6. We use a single factory, which requires 522 physical qubits, along with 50 physical qubits for storing |T⟩ states. Adding in our 288 qubits for the logical circuit and routing, this gives us a total of 860 physical qubits. The additional storage space for data qubits increases the probability of a data qubit logical error to 7.5 × 10^-3, leading to a total error probability of 9.3 × 10^-3. §.§.§ Cost of moving Clifford gates If we choose to move Clifford gates through the circuit, we are left with a total of 386 Pauli π/4 rotations, each of which requires d+1 QEC rounds, and three joint Pauli measurements, which require d QEC rounds each. Therefore our total number of QEC rounds is 389d + 386. We also have six logical patches allocated for both the logical circuit and routing. For a physical error probability of 10^-3, distance d = 10 achieves a logical error probability of 5.4 × 10^-3, requiring 1,496 physical qubits and 4,276 QEC rounds. A |T⟩ state needs to be produced once every 11 QEC rounds with a failure probability of at most 1.2 × 10^-5. We use the same 15-to-1 factory as in Section <ref>, however a single factory is not sufficient for producing one |T⟩ state every 11 rounds. Instead, we use three factories, which produce a single |T⟩ state on average once every 10.4 QEC rounds and require 6,198 physical qubits for implementing the factories. However, the additional three logical qubits for storing |T⟩ states increases the probability of a logical error to 1.2%, which is outside of our target failure probability. To reduce this, we increase the distance to d = 11, which results in a probability of a logical error on data and storage qubits of 4.2 × 10^-3, leading to a total error probability of 7.3 × 10^-3. This increases the number of rounds to 4,665, the number of physical qubits for logical patches and routing to 1,776, and the number of physical qubits for storing up to three |T⟩ states to 726. Combined with the 6,198 physical qubits for generating |T⟩ states, this brings the total number of physical qubits to 8,700. For a physical error probability of 10^-4, distance d=5 achieves a logical error probability of 9.3 × 10^-4 at a cost of 2,331 QEC rounds and 456 physical qubits. At this distance, a |T⟩ state needs to be produced with error probability 2.3 × 10^-5 every 6 QEC rounds. Using the same factory as in Section <ref>, we find that four factories can produce a |T⟩ state on average once every 4.5 QEC rounds. These four factories require 2,088 physical qubits, and 200 physical qubits for storing |T⟩ states. Adding this to our 456 physical qubits for the logical patches and routing leads to a total of 2,744 physical qubits. 
The extra four logical qubits for storing |T⟩ states increase the probability of an error on a data qubit to 2.3 × 10^-3, which means the total error probability is 4.1 × 10^-3. § CONCLUSION As we enter the era of early fault-tolerant quantum computers, where quantum error correction is able to suppress errors on a logical qubit and basic logical gates are demonstrable, it is essential for us to understand the progress required for large-scale fault-tolerant quantum algorithms. Understanding the requirements of small applications is an important step in the process. In this work, we have analysed a minimal application: estimating the ground-state energy of the hydrogen molecule. We have used several techniques to reduce the estimated resources to approximately 900 physical qubits and 15,000 QEC rounds through implementing Clifford operations directly, and approximately 2,700 physical qubits and 2,331 QEC rounds when implementing general Pauli π/4 rotations. It is worth emphasising that even for this small application, the numbers of physical qubits and gates required is several orders of magnitude larger than what has been performed experimentally so far. There are a number of further optimisations which can be made across the quantum computing stack in the hope of reducing these estimates. At the algorithmic level, techniques such as qubitisation have been shown to produce asymptotically shorter quantum circuits <cit.>, and could potentially offer improvements even for this minimal example <cit.>. Statistical phase estimation methods can allow reduced circuit depth in exchange for performing more samples <cit.>, and are often stated as being particularly appropriate for the early fault-tolerant era for this reason. At the gate synthesis level, alternative techniques have produced circuits with a smaller T count, at the cost of additional logical qubits <cit.>. When implementing π/4 joint Pauli rotations, the number of QEC rounds can be further reduced by implementing non-commuting rotations in parallel on separate patches before using teleportation to combine them, though this comes at a cost of more physical qubits <cit.>. Finally, improvements can be made to the implementation of non-Clifford gates which are more targeted towards early fault-tolerant quantum devices, such as the use of error mitigation when implementing faulty T gates <cit.>, avoiding the need for magic state distillation factories. Algorithms such as statistical phase estimation may remain well suited even in the presence of error mitigation <cit.>. A final note is that these estimates assume that quantum computers are affected specifically by depolarising noise <cit.>. While depolarising noise is easy to mathematically model, the physical noise that affects real-world devices is more complex and cannot necessarily be captured by such a model. An important direction of future work is investigating other more realistic noise models such as leakage and deriving similar scaling formulae to that presented in Equation <ref>. § CODE AVAILABILITY The source code for generating & running the logical circuits, and estimating resources, is available on GitHub <cit.>. We thank Ophelia Crawford, Earl T. Campbell, Nicole Holzmann, Jacob M. Taylor and other Riverlane colleagues for insightful discussions, and Daniel Litinski for making his code for estimating the resource requirements of |T⟩ factories publicly available. 
§ LOGICAL CIRCUIT FIGURES The complete logical circuits for estimating the ground-state energy of the hydrogen molecule using textbook QPE and iterative QPE are given in Figures <ref> and <ref>, respectively. In the iterative QPE circuit, the classically-controlled X gates are used to reset the measured qubits to the |0⟩ state. § IMPLEMENTATION OF T-LIKE GATES Here we explain how to implement any sequence of T/T^†, S/S^† and Z gates. This is done in three steps: first, by removing inverse gates by noting that S^† = ZS and T^† = ZST; second, by combining the different phase gates into a sequence consisting of at most one Z gate, one S gate, and one T gate; and third, by absorbing the non-T gates into the conditional operations in Figure <ref>. In Figure <ref>, we give an example of implementing a T gate followed by an S gate using this procedure: we combine the S gate following the T gate with the conditional S gate, to create a gate where either a Z operation is applied at no extra cost, or an S gate is applied, depending on the joint Z ⊗ Z measurement outcome. Similar circuits can also be generated for TZ and TSZ gate sequences, as shown in Figures <ref> and <ref>, respectively. Note that the Z operation on the auxiliary qubit in Figures <ref> and <ref> can be implemented at no extra cost by simply inverting the result of the X basis measurement. § TRANSLATION RULES FOR PAULI ROTATIONS Here we explain the rules for manipulating Pauli rotations used in Section <ref>. In general, a π/2 rotation in some Pauli basis P can be moved past a π/4 operation in some Pauli basis P' without modification to either basis if P and P' commute, or by modifying P' to iPP' if they do not commute. These rules are presented graphically in Figure <ref> [Note that our rules for moving Pauli rotations past measurements are slightly different from those presented in Ref. <cit.>. This is because unlike the circuits in Ref. <cit.> where there is a single layer of measurement gates at the end of the circuit, the iterative QPE circuit in Figure <ref> features mid-circuit measurements and therefore other computations happen after a measurement gate.]. These rules can also be used to move π rotations as well, by noting that R_P(π) = R_P(π/2) · R_P(π/2). It is important however to note that these remaining non-Clifford operations can be more complicated than the single-qubit non-Clifford operations in the original logical circuit. An example of this is shown in Figure <ref>, where a π/2 Z ⊗ X rotation, such as the one that shows up in the CNOT circuit of Figure <ref>, is moved past a π/4 Z rotation on the second qubit. Noting that Z ⊗ X does not commute with I ⊗ Z, the basis for the π/4 rotation becomes i (Z · I) ⊗ (X · Z) = i Z ⊗ (-i Y) = Z ⊗ Y, thus what was originally a π/4 rotation across a single qubit has now become a π/4 rotation across multiple qubits. § ARRANGEMENT OF |T⟩ STATE FACTORIES In Figure <ref> we show how to arrange magic state factories around the logical patches and routing space for both implementing Clifford gates directly and moving Clifford gates through the circuit. In both cases, it is easily possible to arrange up to four factories around the logical patches. It is also possible to arrange even more factories, however this might come at the cost of additional routing space. For simplicity we stick with up to four factories. Note that in Figure <ref> there is some unused space at the top of the arrangement. 
This space has been left empty to keep the arrangement symmetric, but it could be used as additional factory space. Such unused qubits are not included in the resource estimates presented in Section <ref>. We also stress that the space shown in yellow is not the full space required for the factories, but simply an indicator of where the factories can be placed. The green lines denoting the boundaries between the factories and the routing space should be understood as extending beyond the limits of Figure <ref>, to however much space the individual factories require.
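To make the translation rules and T-gate identities of the appendices above concrete, the short Python check below (ours, not part of the released code) verifies the appendix example numerically: a π/2 rotation in basis P moved past a π/4 rotation in an anticommuting basis P' maps P' to iPP', using the rotation convention R_P(θ) = exp(iθP/2) (with the opposite sign convention the phase of the transformed basis flips). It also checks the identities S^† = ZS and T^† = ZST.

```python
# Numerical check (ours) of the Pauli-rotation translation rule and of the
# T-like gate identities.  For a Pauli string P (P^2 = I) we use
# R_P(theta) = exp(i*theta*P/2) = cos(theta/2)*I + i*sin(theta/2)*P.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def rot(P, theta):
    # Valid because every Pauli string squares to the identity.
    return np.cos(theta / 2) * np.eye(P.shape[0]) + 1j * np.sin(theta / 2) * P

# Appendix example: P = Z(x)X (from the CNOT circuit), P' = I(x)Z (a pi/4
# rotation on the second qubit).  They anticommute, so the moved basis is
# i*(Z(x)X)*(I(x)Z) = Z(x)Y, i.e. a pi/4 rotation across both qubits.
P, Pp = kron(Z, X), kron(I2, Z)
lhs = rot(P, np.pi / 2) @ rot(Pp, np.pi / 4)
rhs = rot(1j * P @ Pp, np.pi / 4) @ rot(P, np.pi / 2)
print(np.allclose(lhs, rhs))                  # True: rule holds
print(np.allclose(1j * P @ Pp, kron(Z, Y)))   # True: new basis is Z(x)Y

# T-like gate identities used to remove inverse phase gates.
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
print(np.allclose(S.conj().T, Z @ S))         # S^dagger = Z S
print(np.allclose(T.conj().T, Z @ S @ T))     # T^dagger = Z S T
```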
http://arxiv.org/abs/2307.02207v1
20230705111410
Spin-1 Thermal Targets for Dark Matter Searches at Beam Dump and Fixed Target Experiments
[ "Riccardo Catena", "Taylor R. Gray" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
Riccardo Catena and Taylor R. Gray
Chalmers University of Technology, Department of Physics, SE-412 96 Göteborg, Sweden
[email protected], [email protected]
The current framework for dark matter (DM) searches at beam dump and fixed target experiments primarily relies on four benchmark models, the so-called complex scalar, inelastic scalar, pseudo-Dirac and finally, Majorana DM models. While this approach has so far been successful in the interpretation of the available data, it a priori excludes the possibility that DM is made of spin-1 particles – a restriction which is neither theoretically nor experimentally justified. In this work we extend the current landscape of sub-GeV DM models to a set of models for spin-1 DM, including a family of simplified models (involving one DM candidate and one mediator – the dark photon) and an ultraviolet complete model based on a non-abelian gauge group where DM is a spin-1 Strongly Interacting Massive Particle (SIMP). For each of these models, we calculate the DM relic density, the expected number of signal events at beam dump experiments such as LSND and MiniBooNE, the rate of energy injection in the early universe thermal bath and in the Intergalactic Medium (IGM), as well as the helicity amplitudes for forward processes subject to the unitary bound. We then compare these predictions with experimental results from Planck, CMB surveys, IGM temperature observations, LSND, MiniBooNE, NA64, and BaBar and with available projections from LDMX and Belle II. Through this comparison, we identify the regions in the parameter space of the models considered in this work where DM is simultaneously thermally produced, compatible with present observations, and within reach at Belle II and, in particular, at LDMX. We find that the simplified models considered here are strongly constrained by current beam dump experiments and the unitarity bound, and will thus be conclusively probed (i.e. discovered or ruled out) in the first stages of LDMX data taking. We also find that the vector SIMP model explored in this work predicts the observed DM relic abundance, is compatible with current observations and within reach at LDMX in a wide region of the parameter space of the theory.
Spin-1 Thermal Targets for Dark Matter Searches at Beam Dump and Fixed Target Experiments
August 1, 2023
==========================================================================================
§ INTRODUCTION The lack of discovery of Weakly Interacting Massive Particles (WIMPs) at dark matter (DM) direct detection experiments has motivated the exploration of a variety of alternative theoretical and experimental paradigms over the past decade <cit.>. In this exploration, emphasis has been placed on probing DM candidates lying outside the canonical WIMP mass window, with most of these efforts focusing on the MeV – GeV mass range <cit.>. This choice is supported by at least three reasons <cit.>. First, a DM candidate lighter than a nucleon would not carry enough kinetic energy to induce an observable nuclear recoil in a direct detection experiment, thereby explaining in a simple and economical way the lack of discovery of WIMPs. Second, the present cosmological density of particles in this mass range can match the one observed for DM by the Planck satellite <cit.>. This can occur via the chemical decoupling mechanism if the new sub-GeV states have interactions that involve new particle mediators in the same mass range, thus evading the Lee-Weinberg bound <cit.>.
Finally, the sub-GeV DM hypothesis can be tested experimentally using existing methods, including direct detection experiments sensitive to DM-induced electronic excitations in materials, as well as beam dump and fixed target experiments. Especially important for this work are the operating beam dump experiments LSND <cit.> and MiniBooNE <cit.>, and the fixed target experiment LDMX <cit.>, as they can effectively probe sub-GeV DM models with DM-electron or -nucleon scattering cross sections that are suppressed by small DM velocities or momentum transfers. The theoretical framework currently used in the analysis of operating beam dump experiments, as well as in assessing the prospects of next-generation beam dump and fixed target experiments consists of four benchmark models <cit.>, often referred to as complex scalar DM, scalar inelastic DM, pseudo-Dirac DM, and, finally, Majorana DM. While reviewing these four models goes beyond the scope of this introduction, it is apparent that this theoretical framework a priori excludes the possibility that DM consists of spin-1 particles. However, as pointed out by different groups in a series of recent works focusing on the direct detection of vector DM <cit.>, there is no theoretical or experimental argument supporting this restriction. The main purpose of this work is to extend the current framework for DM searches at beam dump and fixed target experiments to the case of spin-1 DM. As a first step towards this extension, we study the phenomenology of a set of simplified models for vector DM featuring a kinetic mixing between an ordinary and a “dark photon” <cit.>. The latter is responsible for mediating the interactions between DM and the known electrically charged particles. These simplified models conceptually extend the Standard Model (SM) of particle physics in the same minimal way as the four benchmark models listed above. Next, we will focus on the phenomenology of an ultraviolet complete model where DM is made of Strongly Interacting Massive Particles (SIMPs) <cit.>. In both cases, we identify the regions in the parameter space of the theory where DM can be thermally produced, is not excluded by current experiments and, finally, is within reach at LDMX <cit.>. One usually refers to these regions as thermal targets. We find that the simplified models for spin-1 DM that we consider here are subject to strong constraints from existing beam dump experiments, as well as from the unitarity of the S-matrix. We also find that the regions in the parameter space of these models that are not already ruled out by existing theoretical and experimental constraints will soon be probed at LDMX, which will conclusively discover or exclude this family of models in the early stage of data taking. In contrast, in the case of vector SIMP DM, we find that the observed DM cosmological density can be reproduced in a broader region of parameter space. A significant fraction of this is not excluded by existing beam dump experiments, and is within reach at LDMX. This article is organised as follows. We start by introducing the spin-1 DM models explored in our work in Sec. <ref>. We then review the experimental and theoretical constraints these models have to fulfill in Sec. <ref>. The main results of our analysis are reported in Sec. <ref>, where we identify the regions of the parameter space of our models in which DM is not ruled out by current experiments, is thermally produced, and is within reach at LDMX. Finally, we summarise and conclude in Sec. <ref>. 
Useful scattering cross section formulae are listed in the appendices. § MODELS FOR VECTOR DARK MATTER In this section, we provide a brief review of the spin-1 DM models that we consider in this work. We focus on so-called “simplified models” featuring one DM candidate and a single new particle mediator in the mass spectrum, as well as on a renormalisable, ultraviolet complete model where the DM candidate is a Strongly Interacting Massive Particle (SIMP). The first framework enables us to extend the study of DM at fixed target and beam dump experiments in a way that can directly be compared with the existing literature on scalar and fermionic DM. As we will see, this first approach is constrained by existing data and subject to strong bounds from the unitarity of the S-matrix, and will therefore be conclusively probed in the early stages of LDMX data taking. The second framework is by construction compatible with the unitarity of the S-matrix, and it complements our simplified model analysis by focusing on a different mechanism to explain the present DM cosmological density. Specifically, the relic abundance of SIMPs is set by so-called 3 → 2 processes and 2 → 2 forbidden annihilations, whereas in the simplified models we consider here the relic abundance is set by pair annihilation into visible particles. §.§ Simplified models We start our exploration of spin-1 DM by considering a general set of simplified models <cit.>. These models extend the Standard Model (SM) of particle physics by a complex vector field, X^μ, playing the role of DM, and a mediator particle described by the real vector field A'^μ. The following Lagrangian specifies the interactions between the complex and real vector fields X^μ and A'^μ, respectively, and the Dirac spinors f associated with the electrically charged SM fermions, ℒ = -[i b_5 X_ν^†∂_μ X^ν A'^μ + b_6 X_μ^†∂^μ X_ν A'^ν + h.c. ] -[b_7 ϵ_μνρσ( X^†μ∂^ν X^ρ) A'^σ + h.c. ] -h_3 A'_μf̅γ^μ f , where, as anticipated, f includes electrons, muons, taus, and quarks (neutrinos are not included). The first line in Eq. (<ref>) describes interactions that can be generated in models for non-abelian spin-1 DM, as reviewed in Sec. <ref> and shown in detail in <cit.>. The strength of these interactions is parametrized by the coupling constants b_5 and b_6. Without loss of generality b_5 can be taken to be real, while the coupling constant b_6 is in general complex. The second line in Eq. (<ref>) describes interactions that can arise from abelian spin-1 DM models <cit.>, and is characterised by the in general complex coupling constant b_7. Finally, the last line in Eq. (<ref>) corresponds to the coupling between the electrically charged SM fermions and a “dark photon”, here associated with the vector field A'^μ. In order to make the analogy with the dark photon model explicit, one could identify h_3 with h_3 = eϵ, where, ε is the so-called kinetic mixing parameter, which enters the Lagrangian of the dark photon model via the term -(ε/2) F_μν F'^μν, F_μν and F'^μν being the field strength tensors of the ordinary and dark photon, respectively. In our numerical applications, we consider the following cases in which only one b-coupling at a time and h_3 are non-zero: (h_3, b_5)≠ 0, (h_3,[b_6])≠ 0, (h_3,[b_6])≠ 0, (h_3,[b_7])≠ 0, and (h_3,[b_7])≠ 0. 
To further facilitate the comparison between our results and the existing literature on the dark photon model, we later write the non-zero DM couplings in terms of α_D = g_D^2/(4π), where g_D is one of b_5, [b_6], [b_6], [b_7] or [b_7]. As far as the DM and mediator particle mass are concerned, we denote them by m_X and m_A', respectively. In the simplified models of this section, they are free, independent parameters. §.§ Non-abelian SIMPs As a second framework for vector DM, we consider a model where the DM candidate is a Strongly Interacting Massive Particle, or SIMP <cit.>. In this model, the SM gauge group is extended by a local SU_X(2)× U(1)_Z' symmetry group under which none of the SM particles is charged. The corresponding gauge couplings are g_X and g_Z'. The model also features an extended Higgs sector including a scalar singlet, S, and a second scalar H_X transforming non-trivially under SU_X(2)× U(1)_Z'. The gauge bosons associated with the new symmetry group are denoted by X_i,μ, i=1,2,3 and Z'_μ, or, equivalently, by X_μ≡ (X_1,μ+ i X_2,μ)/√(2), X^†_μ≡ (X_1,μ - i X_2,μ)/√(2), X_3,μ and Z'_μ, respectively. The SU(2)× U(1)_Z' gauge group is spontaneously broken by the vacuum expectation values of S, i.e. v_S, and H_X, i.e. v_X. This generates one complex and two real mass eigenstates, corresponding to X_μ and two linear combinations of Z'_μ and X_3,μ, denoted here by Z̃'_μ and X̃_3,μ, respectively. Their masses are <cit.>, m^2_X = 1/2 g_X^2 I v_X^2 , m^2_Z̃' =g^2_X I^2 v_X^2 (1-θ'_X g_Z'/g_X) , m^2_X̃_3 = g^2_X I^2 v_X^2 (1+tanθ'_X g_Z'/g_X) , where tan(2θ'_X)= 2c_X s_X/c^2_X-α s^2_X , with s_X=g_Z'/√(g_X^2+g_Z'^2), c_X=g_X/√(g_X^2+g_Z'^2) and, finally α≡ 1+q^2_S v^2_S/(I^2 v^2_X). Here q_S is the U_Z'(1) charge of S, while I labels the representation of SU_X(2) under which H_X transforms, e.g. I=1/2 for a doublet, I=1 for a triplet, and I=3/2 for a quadruplet. The interaction Lagrangian containing the cubic self-interactions between these mass eigenstates is <cit.>, ℒ_3 = -i g_Xcosθ'_X [ (∂^μ X^ν -∂^ν X^μ) X^†_μX̃_3,ν - (∂^μ X^ν† -∂^ν X^μ†) X_μX̃_3,ν + X_μ X^†_ν(∂^μX̃^ν_3 -∂^νX̃^μ_3) ] -i g_Xsinθ'_X [ (∂^μ X^ν -∂^ν X^μ) X^†_μZ̃'_ν - (∂^μ X^ν† -∂^ν X^μ†) X_μZ̃'_ν + X_μ X^†_ν(∂^μZ̃^'ν -∂^νZ̃^'μ)]. The model also predicts quartic interactions between mass eigenstates. These are relevant in relic density calculations, and have explicitly been calculated in <cit.>. Finally, “neutral current” interactions between Z̃^'_μ, X̃_3 μ and the charged SM fermions arise from a kinetic mixing term added to the Lagrangian of this SU_X(2)× U(1)_Z' gauge model. Specifically, they are given by <cit.> ℒ_ mix = -e εcos (θ_X') Z̃^'_μ f̅γ^μ f + e εsin (θ_X') X̃_3 μ f̅γ^μ f . Eq. (<ref>) shows that for tanθ'_X<0 and g_Z'/g_X |tanθ'_X|<1/2, the predicted mass hierarchy is m_X^2< m^2_X̃_3 <m_Z̃'^2 . For example, for sin (2 θ_X') = -0.1 and α_D≡ g_X^2/(4 π)=0.5, Eq. (<ref>) is always satisfied for perturbative values of g_Z'. Consequently, when the mass hierarchy in Eq. (<ref>) is realised, X_μ is a stable DM candidate, while X̃_3 and Z̃' mediate the interactions between X_μ and the SM fermions via Eq. (<ref>). Finally, let us note that for cos (θ'_X)=1, Eq. (<ref>) reduces to the first line in Eq. (<ref>) with b_5=-2 [b_6]=g_X, if one integrates by part the second line in Eq. (<ref>) and uses the equation of motion for X_μ (i.e. Proca equation), which implies ∂_μ X^μ=0. 
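As a quick numerical illustration of the spectrum above, the following sketch (ours; the input values are made-up benchmarks, and the Z̃' mass is read as m_Z̃'^2 = g_X^2 I^2 v_X^2 (1 - tanθ'_X g_Z'/g_X), i.e. with the same tanθ'_X factor that appears in m_X̃_3^2) evaluates the three masses for a given sin(2θ'_X) and checks the stability hierarchy m_X < m_X̃_3 < m_Z̃' of Eq. (<ref>).

```python
# Illustrative sketch (ours, not from the paper): vector-SIMP mass spectrum
# for a benchmark mixing angle.  All numerical inputs are made-up examples;
# masses come out in the same (arbitrary) units as v_X.  The Z'-tilde mass is
# read as m_Zp^2 = g_X^2 I^2 v_X^2 (1 - tan(theta'_X) g_Z'/g_X).
import numpy as np

def simp_spectrum(g_X, g_Zp, v_X, I, sin2theta):
    theta = 0.5 * np.arcsin(sin2theta)   # paper benchmark: sin(2 theta'_X) = -0.1
    t = np.tan(theta)
    m_X = np.sqrt(0.5 * g_X**2 * I) * v_X
    m_X3 = g_X * I * v_X * np.sqrt(1.0 + t * g_Zp / g_X)
    m_Zp = g_X * I * v_X * np.sqrt(1.0 - t * g_Zp / g_X)
    return m_X, m_X3, m_Zp

alpha_D = 0.5                            # alpha_D = g_X^2 / (4*pi)
g_X = np.sqrt(4.0 * np.pi * alpha_D)
m_X, m_X3, m_Zp = simp_spectrum(g_X, g_Zp=0.1, v_X=0.1, I=1.0, sin2theta=-0.1)
# Stability of the DM candidate X requires m_X < m_X3 < m_Zp; here this holds
# because tan(theta'_X) < 0 and |tan(theta'_X)| g_Z'/g_X < 1/2 (for I = 1).
print(m_X, m_X3, m_Zp, m_X < m_X3 < m_Zp)
```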
§.§ Vector DM phenomenology: general considerations In this subsection, we highlight the main differences between the models defined in Sec. <ref> and in Sec. <ref> focusing on the predicted DM relic abundance, kinetic equilibrium, mediator production and decay rates, and, finally, relativistic DM-electron and -nucleon scattering cross sections. §.§.§ Relic density Let us denote the DM particle associated with X_μ by X^+ and the corresponding DM antiparticle by X^-. Also, let n_X^+ be the cosmological number density of DM particles and n_X^- be the corresponding cosmological density of DM antiparticles. In the case of the CP-preserving simplified or SIMP DM models that we consider here, n_X^+=n_X^-, and the total DM number density, n_X=n_X^++n_X^-, evolves with time according to the following Boltzmann equation <cit.>, ṅ_X + 3H n_X = -1/2⟨σ v_ rel⟩_X^+X^-→ ff̅ (n_X^2 - n_X, eq^2) - 1/2⟨σ v^2_ rel⟩_X^+X^+X^-→ X^+X̃_3 (n_X^3 - n_Xn_X, eq^2) + 2 ⟨σ v_ rel⟩_X̃_3X̃_3→ X^+X^- n_X̃_3, eq^2( 1-n^2_X/n^2_X, eq) . In general, Eq. (<ref>) shows that n_X evolves from an equilibrium configuration to a constant co-moving value as a result of the DM chemical decoupling from the thermal bath, i.e. the so-called freeze-out or chemical decoupling mechanism. The first term in the right-hand-side of Eq. (<ref>) describes the time evolution of the DM number density due to DM pair annihilation into SM fermions. For ϵ∼ 10^-4 (10^-6) and m_X∼ 100 MeV (1 MeV), this term alone can account for the entire DM relic density. We will explore this DM production channel within the framework of simplified models for vector DM introduced above. The second and third line in Eq. (<ref>) describes the time evolution of the DM number density due to 3 → 2 processes and 2 → 2 forbidden annihilations that are specific to the vector SIMP model introduced above (i.e. they are zero in the simplified model framework). The DM to X̃_3 mediator mass ratio determines whether the DM relic abundance is set by 3 → 2 processes or 2 → 2 forbidden annihilations, as shown in Fig. 3 of <cit.>. We will explore the interplay of these two DM production mechanisms in the case of SIMP DM. §.§.§ Kinetic equilibrium In Eq. (<ref>) we implicitly assumed the mass hierarchy of Eq. (<ref>) for the SIMP DM model we introduced in Sec. <ref>. We also assumed that the associated mediator X̃_3 is in kinetic equilibrium with the SM thermal bath during the DM freeze-out. Interestingly, the in-equilibrium decay of X̃_3 into SM particles, combined with the effective scattering of DM with the X̃_3 mediator induced by the cubic and quartic interactions introduced above, serve as mechanisms to keep the DM particles in kinetic equilibrium during freeze-out, and thereby satisfy the strong constraints from structure formation on the DM kinetic decoupling temperature <cit.>. For a given g_X, sin(2 θ'_X), m_X and m_X̃_3, the value of ϵ required for X̃_3 and, consequently, for the DM particles to be in kinetic equilibrium at the freeze-out temperature T_f can be estimated from n_X̃_3, eq(T_f) Γ_X̃_3 > H(T_f)n_X, eq(T_f) , where H(T_f) is the Hubble rate at T_f, and the equilibrium densities for X and X̃_3 are given by n_X, eq = 45x^2/2g_*s(T)π^4 s K_2(x) , n_X̃_3, eq = 45 m_X̃_3^2x^2/4g_*s(T)π^4 m_X^2 s K_2(m_X̃_3x/m_X) , where s is the entropy density, g_*s(T) is the effective number of entropic relativistic degrees of freedom at the temperature T, K_2 is a modified Bessel function of the second kind and x≡ m_X/T. In Eq. 
(<ref>), Γ_X̃_3 is the total decay rate of X̃_3, for which the expression is given in <cit.>. §.§.§ Mediator production and decay Key to our exploration of spin-1 DM is the production and subsequent decay of mediator particles in fixed target and beam dump experiments. In the case of the simplified models of Sec. <ref>, we generically expect that A' particles are produced through the kinetic mixing term in Eq. (<ref>), either via dark bremsstrahlung or via meson decays. These A's are then expected to decay into a DM particle/anti-particle pair for values of ϵ that are consistent with the observed relic density set via the first term in the right-hand side of Eq. (<ref>). In contrast, in the case of the vector SIMP DM model introduced in Sec. <ref>, both the X̃_3 and the Z̃' mediator can in principle be produced in fixed target and beam dump experiments. In our calculations we will focus on sin(2θ'_X)=-0.1, which implies that only Z̃' particles can significantly be produced via the interaction in Eq. (<ref>), and that, once produced, these Z̃' particles will dominantly decay invisibly via gauge bosons self-interactions, as long as ϵ is small compared to g_X. Consequently, for sin(2θ'_X)=-0.1, the simplified and SIMP models behave similarly from the point of view of “dark vector boson” production at fixed target and beam dump experiments. §.§.§ Scattering by electrons and nuclei Finally, we also need the (relativistic) cross sections for DM-electron and -nucleon scattering in order to compare the simplified models of Sec. <ref> and the SIMP DM model of Sec. <ref> with the experimental constraints reviewed in Sec. <ref>. This in particular applies to the analysis of DM direct detection experiments and of beam dump experiments, where DM particles produced by the decay of A' mediators (in the case of simplified models) or Z̃' mediators (in the case of SIMPs) are searched for in electron or nuclear recoil events in a downstream detector. We calculate these cross sections by implementing the models of Sec. <ref> in FeynRules <cit.> and then using CalcHEP <cit.> to generate analytic expressions for the squared modulus of the spin-averaged scattering amplitudes. We finally validate the outcome of this symbolic calculation through direct analytical calculations of a subset of selected cross sections. In appendices <ref> and <ref>, we list the relativistic DM-electron and -nucleon scattering cross sections that we find for the simplified models of Sec. <ref>. In appendix <ref>, we report the scattering cross sections that we obtain for the SIMP DM model of Sec. <ref> as explained above. In the calculation of DM-nucleon scattering cross sections, we use the mediator-nucleon interaction Lagrangian, ℒ_N =eϵ F_1ψγ^μψ A'_μ + eϵF_2/2 m_N[ ψσ^μν(∂_νψ) + (∂_νψ) σ^μνψ]A'_μ , in the case of simplified models. For the case of SIMP DM, we use Eq. (<ref>) but with F_1 (F_2) →cos(θ'_X) F_1 (F_2) and A'→Z̃'̃. Here, F_1 and F_2 are nuclear form factors, and we list them in appendix <ref>. § EXPERIMENTAL AND THEORETICAL CONSTRAINTS In this section, we introduce a selection of constraints and projections from, respectively, operating and future DM search experiments that apply to the models of Sec. <ref>. By complementing these constraints and projections with bounds from the unitarity of the S-matrix, in Sec. 
<ref> we will identify the regions of the parameter space of the vector DM models we consider in this work where DM is simultaneously: 1) thermally produced, 2) experimentally allowed and, finally 3) detectable. We refer to these regions in parameter space as thermal targets. §.§ Relic density Accurate measurements of the Cosmic Microwave Background (CMB) angular power spectrum by the Planck collaboration set strong constraints on the present DM cosmological density <cit.>. The spin-1 DM models considered in this work, introduced in Sec. <ref>, are capable of producing the observed DM relic abundance consistent with Planck by the freeze-out mechanism. In the case of simplified models (Sec. <ref>), the freeze-out of DM pair annihilations into visible SM particles sets the DM relic density, whereas in the case of vector SIMP DM (Sec. <ref>) the present DM cosmological density arises from the freeze-out of 3→ 2 and forbidden annihilations, as one can see from Eq. (<ref>). Below, we discuss the two scenarios separately. In the case of simplified models, we are interested in the region of parameter space where m_A'>2m_X. In this region, the relic abundance is set dominantly by direct DM annihilation into SM fermions through an s-channel. Comparing our theoretical predictions based on Eq. (<ref>) with CMB data, we calculate the thermally averaged cross sections for direct annihilation using the MicrOMEGAS software <cit.>. We then compute the DM relic density by using our own Boltzmann solver, which relies on the freeze-out approximation from <cit.>. Our software results agree with MicrOMEGAS, although we include the contributions from DM annihilation into hadronic final states <cit.>. Fig. <ref> shows the contour lines consistent with the Planck DM abundance, or relic targets, for each of the simplified models introduced in Sec. <ref> in addition to three benchmark models from <cit.> (and mentioned in the introduction, namely complex scalar, pseudo-Dirac, and Majorana DM). Each of the simplified models for spin-1 DM in Fig. <ref> is defined as having all couplings in Eq. (<ref>) set to 0 except for h_3 and the b-coupling specified in the legends. There are kinks in the curves occurring at m_X ≈ m_μ, where DM annihilation into muons becomes kinematically accessible. The resonance features appear because of resonances in the cross sections for DM annihilation into hadrons. In the case of vector SIMP DM, we rely on previous results from <cit.>. For each given α_D=g_X^2/(4π) (g_X is one of the gauge couplings introduced in Sec. <ref>) and sin(2θ'_X) we extract the value of the m_X̃_3/m_X ratio that gives the correct DM relic density from Fig. 3 in <cit.>. While for α_D=0.1 the correct relic abundance can be obtained for DM masses above approximately 10 MeV, for α_D=0.5 vector SIMP DM can account for the whole cosmological abundance of DM only for masses above about 80 MeV. This effect is illustrated in Fig. <ref>, which shows the DM relic abundance as a function of the DM mass and of the m_X̃_3/m_X ratio, focusing on the case in which 3→2 processes are the dominant production mechanism. As one can see from this figure, for m_X̃_3/m_X=2 the DM relic abundance drops to zero, independently of the DM particle mass. This is due to the fact that for m_X̃_3/m_X>2, the 3→ 2 process is kinematically not allowed. §.§ Direct Detection In principle, DM direct searches via electronic transitions in detector materials located deep underground also place important constraints on sub-GeV DM models, e.g. <cit.>.
For spin-1 DM, the current most competitive constraints are from Xenon1T and Xenon10, which only appear in our plots for the b_5 and [b_7] models. We take the 90% C.L. exclusion limits from the work of <cit.>. §.§ Energy Injection DM annihilations in the early universe can inject energetic particles into the photon-baryon plasma or into the intergalactic medium (IGM), altering the CMB or the IGM temperature, respectively. However, CMB limits are only competitive for s-wave annihilating DM <cit.>. Among the models introduced in Sec. <ref>, only the model with [b_7]≠ 0 gives rise to s-wave dominant annihilation cross sections, thus we include 95% C.L. exclusion bounds from the CMB for this model. For all remaining spin-1 models in Sec. <ref>, including the SIMP DM model, the predicted annihilation cross section is p-wave. CMB limits on p-wave annihilation cross sections have been computed here <cit.>. We find that they are weaker than the IGM limits on the spin-1 models we consider in this work. Consequently, we set 95% C.L. exclusion limits from measurements of the IGM temperature extracted from Lyman-α observations on our p-wave annihilating DM models. Specifically, we require that the predicted thermally averaged annihilation cross section is smaller than the upper bounds reported in <cit.> as a function of the DM mass. In this analysis, we calculate the relevant annihilation cross sections analytically, and then compare them with the output of MicrOMEGAS. These IGM upper limits only appear in the top left corner of our parameter space for certain mass ratios (m_A'/m_X) and models. §.§ Beam Dumps and Fixed Target Experiments A beam of protons or electrons incident on a fixed target creates cascades of interactions at beam dump experiments. The goal is to produce DM in these cascades, which can then be detected in a downstream detector. Calculating the experimental reach of these experiments involves modeling the processes that give rise to DM production and detection, which is done through Monte Carlo (MC) simulations. These simulations include the details of the detector, the beam, the interactions which produce DM, and the DM model. With this objective, we use a modified version of the numerical tool BdNMC, a beam dump Monte Carlo software package <cit.>, in which we implement our spin-1 DM models to simulate DM production and interactions. BdNMC has the benefit of providing a simple and rigorous framework for simulating the relevant experiments Mini-Boone and LSND. In the following section, 90% C.L. limits on the model parameter space calculated from these simulations are presented, showing the reach of current experiments including beam dump and missing energy/momentum experiments which give rise to the most competitive constraints on spin-1 DM. Below, each experiment considered is introduced. §.§.§ LSND and MiniBooNE At proton beam dump experiments LSND <cit.> and MiniBooNE <cit.>, a proton beam is incident on a target and the following chain of interactions occur producing DM: p p → X π^0 ; π^0 →γ A'; A' → DM DM, where in the inclusive process, p p → X π^0, X denotes an unspecified/unmeasured set of particles, whereas π^0 can also be η in the case of MiniBooNE where the energy is sufficient. In addition, dark proton bremsstrahlung also produces DM in the case of MiniBooNE. The produced DM can then be detected in the downstream detector through DM-electron and for MiniBooNE also DM-nucleon scattering. 
We compute the expected number of signal events using a modified version of the software BdNMC <cit.>, a MC simulation tool for beam dumps. We added to BdNMC the model dependent DM-electron and -nucleon scattering cross section reported in the App. <ref> and <ref> as well as the relevant branching ratios for dark photon decay. At LSND, 55 non standard events were observed at 90% confidence level. A factor of 2 is included to account for the uncertainty in the pion production rate, so we take the 90% C.L. at 110 events. We perform MC simulations of DM events at LSND, taking into account our model dependent cross sections, and we draw the contours in y vs m_X space that corresponds to 110 signal events. In this work, we adopt the standard notation y=ϵ^2 α_D (m_X/m_A')^4. Similarly, we perform MC simulations of MiniBooNE to calculate the expected number of signal events. No events were observed at MiniBooNE, thus we take the contour in y vs m_X space at 2.3 events which corresponds to a 90% C.L. exclusion limit. §.§.§ E137 Limits from the electron beam dump experiment E137 are not competitive with MiniBooNE and LSND, thus we do not include them in our analysis. §.§.§ NA64 The missing energy experiment NA64 <cit.>, where an electron beam is incident on a target, aims to produce a DM flux from dark bremsstrahlung and detect these signals by their missing energy. No signal events have been observed at this experiment. We report 90% confidence limits extracted from <cit.>, which we project onto the y vs m_X plane. §.§.§ LDMX LDMX is a future missing momentum experiment <cit.> with an 8 GeV electron beam incident on a tungsten target with 10^16 EOT. The ultimate reach at 90% C.L. is calculated using the expected number of DM signal events from simulations and we project this expected limit onto our parameter space. In addition, we include the projected 90 % C.L. exclusion limits from the analysis of <cit.>, where bremsstrahlung photons are converted into vector mesons and then decay to invisible states, giving an extended sensitivity reach in the larger DM mass regime. §.§ Monophoton searches We also include searches for single photon events at e^+ e^- colliders, where DM is produced through the process e^+ e^- →γ A', A' → DM DM. §.§.§ BaBar The BaBar detector at the PEP-II B-factory searches for a narrow peak in the missing mass distribution in the events with one high energy photon. BaBar has observed no signal events, and we take the 90% C.L. limits on ϵ vs m_X from <cit.> and project them onto our parameter space. §.§.§ Belle-II For the future experiment Belle-II, we use the expected 90% C.L. limits on ϵ vs m_X for phase 3 of Belle-II reported in <cit.>, to obtain the expected exclusion limits on our spin-1 DM model. §.§ Unitarity bound In general, simplified DM models can violate perturbative unitarity in some regions of parameter space <cit.>. Due to the energy dependence introduced in the cross section by the longitudinal component of the vector DM polarization vectors, as well as from the underlying derivative couplings, the simplified models for spin-1 DM introduced in Sec. <ref> predict large and un-physical scattering amplitudes. Therefore, their parameter space is subject to a “unitarity bound”, or in other words a bound of theoretical validity. 
Violations of this bound indicate either that the theory is non perturbative (all terms in the perturbative expansion of the S-matrix are equally important), or that it is not complete, and thus additional fields have to be included to cancel out energy dependent, un-physical contributions to scattering cross sections. Unitarity violation from DM self-scattering was investigated in <cit.>, while here we calculate the DM-e^- scattering amplitude to determine at which parameters unitarity is violated. Following <cit.>, we implement the unitary bound by requiring that in the (y,m_X) plane, |(M_i→ i^J) |≤ 1 , 2|(M_i→ i^J) |≤ 1 , where i→ i = X e^- → X e^-, J=0, and ℳ_X e^- → X e^-^0 (s) = β/32 π∫^1_-1 dcosθℳ_X e^- → X e^-(s,cosθ) . § THERMAL TARGETS IDENTIFICATION We now present our spin-1 thermal targets for DM searches at beam dump and fixed target experiments. As anticipated, in these regions of parameter space DM is simultaneously thermally produced, not excluded by existing experimental results, and within reach at Belle II or LDMX. We discuss the simplified models of Sec. <ref> and the vector SIMP model of Sec. <ref> separately. §.§ Simplified Models Figs. <ref>, <ref>, and <ref> summarize the constraints and projections on the (m_X, y) plane that we obtain as explained in Sec. <ref> for the simplified models introduced in Sec. <ref>. Here, y=ϵ^2 α_D (m_X/m_A')^4 and the black curves on these figures show the contours that are consistent with the observed abundance of DM measured by Planck <cit.>. The scientific potential of the upcoming experiment LDMX, the area above the red dashed curves of the figures, is significant for sub-GeV DM, since it is projected to probe down to much smaller couplings than previous experiments. Specifically, the left panels in Fig. <ref> show current and expected exclusion limits on the spin-1 model with b_5≠ 0 for both m_A'/m_X = 3 and m_A'/m_X = 2.5. The excluded areas in orange, green, blue and purple correspond to the MiniBooNE, LSND, NA64, and BaBar experiments, respectively. For comparison, we also include the exclusion limits on the y coupling obtained in <cit.> from the null result reported by the Xenon10 and Xenon1T experiments. Finally, the expected exclusion limits for LDMX are compared with the expected reach of Belle II. Independently of the (m_A'/m_X) ratio, this model is ruled about by current beam dump experiments. The right panels in Fig. <ref> shows the same set of constraints and projections now for the familiar complex scalar DM model, which we recompute as described in Sec. <ref> to calibrate our codes, and for comparison. Indeed, the complex scalar DM model exhibits the same derivative coupling between DM and the dark photon as in the model with b_5≠ 0. The only difference between the two models is in the Feynman rules for incoming and outgoing DM particles, which in the latter case involves momentum dependent polarisation vectors. This difference produces a cross section for DM-nucleon scattering that is enhanced by a factor of (E_p⃗/m_X)^2 ≫ 1 relative to the case of the complex scalar DM model, where E_p⃗ is the relativistic energy of the incoming DM particle in the rest frame of the downstream detector. As a result, the MiniBooNE constraints on the b_5 model are much stronger than in the case of complex scalar DM. Let us now focus on Fig. <ref>. 
This figure shows current and expected exclusion limits on the spin-1 models with [b_6] ≠ 0 and [b_6]≠ 0 (top left and top right panels) as well as on the models with [b_7] ≠ 0 and [b_7] ≠ 0 (bottom left and bottom right panels). In all panels we assume m_A'/m_X = 3, while the colour code is the same as in Fig. <ref>. Current beam dump experiments, LSND and MiniBooNE (green and orange shaded regions respectively), and BaBar (pink shaded regions) are able to rule out large parts of the (m_X,y) plane for the [b_7], [b_6] and [b_7] models, while leaving the [b_6] ≠ 0 spin-1 model still compatible with observations in the mass range between approximately 40 MeV and 200 MeV. Remarkably, the constraints from MiniBooNE on y are much stronger in the case of the [b_6] model than for the [b_6] model. This is due to the fact that for the [b_6] model the cross section for DM-nucleon scattering is enhanced by a (E_p⃗/m_X)^2 ≫ 1 factor relative to the analogous cross section for the [b_6] model. Here E_p⃗ is the energy of the DM particle in the downstream detector rest frame. As anticipated above, a similar enhancement is also present in the case of the b_5≠ 0 spin-1 model. Finally, in the case of the [b_6] model, top left panel in Fig. <ref>, we also report the unitary bound on y arising from the helicity amplitude for DM-electron scattering, which is one of the processes directly entering the calculation of the constraints in Fig. <ref>. It should also be noticed that for the [b_7] spin-1 model, Figs. <ref> and <ref>, the DM annihilation cross section is s-wave dominant in contrast to being p-wave dominant as in the other models we consider, leading to a strong constraint from CMB measurements on this scenario. Fig. <ref> shows current and expected exclusion limits on the same models as in Fig. <ref>, now assuming the different mass ratio, m_A'/m_X = 2.5. While this leaves are conclusions qualitatively unchanged, the range of DM masses that are compatible with all observations in the case of the [b_6] model is now significantly broader, varying from about 10 MeV to 300 MeV. §.§ Non-abelian SIMPs We now turn our attention to the case of vector SIMP DM. We start by reviewing some of the features of the model, and its differences with the simplified models. In the SIMP model, the DM relic abundance is set by the freeze-out of 3→ 2 processes and forbidden annihilations, and is thus independent of the coupling y. At the same time, for the thermal production of SIMP DM to work, DM has to be in kinetic equilibrium in the early universe. As anticipated, this occurs via effective DM-X̃_3 mediator scattering processes combined with in-equilibrium X̃_3 decays into SM particles. Since the decay rate of X̃_3 depends on ϵ <cit.>, only above a certain critical value for y vector SIMP DM can effectively be thermally produced in the right amount. As described in Sec. <ref>, we calculate this lower bound by using Eq. (<ref>). The result of this calculation defines a region in the parameter space of the model above which SIMPs are in kinetic equilibrium at their chemical decoupling, and the freeze-out mechanics can in principle successfully predict the present DM cosmological density. Whether or not the whole cosmological DM abundance is in the form of vector SIMPs depends on the choice of α_D. For example, for α_D=0.1 vector SIMPs can constitute the entire DM in our universe for masses above about 10 MeV, whereas for α_D=0.5 this is only possible for masses above about 80 MeV. 
Here and in the figures below, we assume the benchmark value for the mixing angle sin(2 θ'_X)=-0.1. Keeping these general considerations on SIMP DM in mind, we are now ready to compare the predictions of the vector SIMP DM model with the experimental and theoretical constraints of Sec. <ref>. Fig. <ref> shows the current and expected exclusion limits on the y coupling as a function of the DM mass that we obtain from a reanalysis of data collected at NA64, MiniBooNE, LSND, and BaBar, as well as from projections for LDMX and Belle II. We obtain these exclusion limits by comparing theory and observations as described in Sec. <ref>, while the colour code in the figure is the one of the figures in the previous sections. Remarkably, in the case of SIMP DM the strongest bound on y arises from NA64, rather than from MiniBooNE or LSND. This is due to the fact that the NA64 bound on y is model-independent, while the cross section for DM-nucleon and -electron scattering for SIMP DM is suppressed by cancellations between contributions from the m_Z̃' and m_X̃'_3 mediators. In Fig. <ref>, we assume α_D=0.5, m_A'/m_X=3, and m_X̃_3/m_X≃2. Fig. <ref> reports the results of an analogous analysis where we assume a different combination of parameters, namely α_D=0.1, m_A'/m_X=3, and m_X̃_3/m_X≃ 2. The main difference between the analyses is the range of masses for which the whole cosmological DM abundance is in the form of vector SIMPs. As explained above, this difference is due to the different assumptions made for α_D in the two figures. § CONCLUSION In this analysis, we extended the current landscape of sub-GeV DM models considered in the context of various experiments such as MiniBooNE, LSND and LDMX to a set of models for spin-1 DM, including a general family of simplified models (involving one DM particle and one mediator – the dark photon) and an ultraviolet complete model based on a non-abelian gauge group (now including two mediators and an extended Higgs sector) where DM is a vector SIMP. For each of these models, we calculated the DM relic density, the expected number of signal events in beam dump experiments such as LSND and MiniBooNE, the rate of energy injection in the early universe thermal bath and in the IGM, as well as the helicity amplitudes for forward processes subject to the unitary bound. We then compared these predictions with a number of different experimental results from Planck, CMB observations, direct detection experiments (Xenon10 and Xenon1T), data on the IGM temperature from Lyman alpha observations, LSND, MiniBooNE, NA64, and BaBar and with available projections from LDMX and Belle II. Through this comparison, we identified the regions in the parameter space of the models considered in this work where DM is simultaneously thermally produced, compatible with present observations, and within reach at Belle II and, in particular, at LDMX. We found that the simplified models for spin-1 DM investigated in our analysis are strongly constrained by LSND and MiniBooNE, as well as from bounds on the unitarity of the S-matrix. The only model not already excluded by these experimental and theoretical constraints is the one characterised by the coupling constants h_3 and [b_6]. For a dark photon to DM mass ratio of 3 (2.5), this model is compatible with current observations, within reach at LDMX and admits a thermal DM candidate for DM masses in a window between about 40 (10) MeV and 200 (300) MeV. 
In this mass range, the model has a relic density contour lying very close to current 90% C.L. exclusion limits from beam dump experiments in the (m_X, y) plane, and will thus be conclusively probed (i.e. excluded or discovered) in the first LDMX run. At the same time, we found that the vector SIMP model explored in this work admits thermal DM candidates that are not ruled out by beam dump experiments and within reach at LDMX in a wide region of the underlying parameter space. The model features a DM production mechanism that is complementary to the freeze-out of DM pair annihilations into SM particles of simplified models, and is based on the interplay of 3 → 2 processes and forbidden annihilations. It also exhibits a lower bound on the DM particle mass arising from the relic density constraint. The larger α_D, the larger the minimum admissible DM particle mass. Ultimately, our investigation bridges a gap in the current knowledge of sub-GeV DM by providing the DM community with new sub-GeV spin-1 thermal targets lying in the experimentally accessible region of next-generation beam dump and fixed target experiments such as LDMX. We would like to thank Felix Kahlhoefer and Chris Chang for pointing out the importance of the unitarity bound on simplified models for spin-1 DM. We would also like to thank Avik Banerjee and Gabriele Ferretti for drawing our attention to the work by Choi et al. <cit.> on vector SIMP DM. In addition, we greatly appreciate the important discussions we had with Patrick deNiverville on beam dump experiment simulations, in particular his software BdNMC. The research contained in this article was performed within the Knut and Alice Wallenberg project grant Light Dark Matter (Dnr. KAW 2019.0080). We would like to thank all participants in the project for valuable discussions on sub-GeV DM and the physics reach of LDMX during our weekly collaboration meetings. R.C. acknowledges support from individual research grants from the Swedish Research Council, Dnr. 2018-05029 and Dnr. 2022-04299. § CROSS SECTIONS FOR RELATIVISTIC DARK MATTER-NUCLEON SCATTERING In this appendix, we provide analytic expressions for the differential cross sections for relativistic DM-nucleon scattering for the spin-1 DM models of Sec. <ref>. As anticipated, we obtain these cross sections by implementing the models of Sec. <ref> in FeynRules <cit.> and then using CalcHEP <cit.> to generate analytic expressions for the squared modulus of the spin-averaged scattering amplitude. We finally validate the outcome of this symbolic calculation through direct analytical calculations of a subset of selected cross sections. In the laboratory frame, we find dσ_NX(E_p⃗, E_p⃗')/ d E_p⃗' = 1/32 π m_N (E_p⃗^2-m_X^2)|ℳ_NX(E_p⃗, E_p⃗')|^2 , where E_p⃗ (E_p⃗') is the initial (final) DM particle energy and the squared scattering amplitude is given by |ℳ_NX(E_p⃗, E_p⃗')|^2 = η^2[ F_1^2 𝒜_η(E_p⃗, E_p⃗') + F_2^2 ℬ_η(E_p⃗, E_p⃗') + F_1F_2𝒞_η(E_p⃗, E_p⃗')]/3 m_X^4 [2 m_N (E_p⃗- E_p⃗')+m_A'^2]^2 . Here, a dependence of the nucleon form factors F_1(q^2) and F_2(q^2) on the momentum transfer q=p-p' is understood. The three functions, 𝒜_η(E_p⃗, E_p⃗'), ℬ_η(E_p⃗, E_p⃗') and 𝒞_η(E_p⃗, E_p⃗') in Eq. (<ref>) are model dependent and have dimension of mass to the eighth power, so that |ℳ_NX(E_p⃗, E_p⃗')|^2 is dimensionless. η is the coupling constant characterising the underlying DM model. Below, we specify 𝒜_η(E_p⃗, E_p⃗'), ℬ_η(E_p⃗, E_p⃗') and 𝒞_η(E_p⃗, E_p⃗') for different choices of η. 
* For η=b_5, we find 𝒜_b_5(E_p⃗, E_p⃗') = 8 m_N [E_p⃗'(2 E_p⃗ m_N+m_X^2)-E_p⃗ m_X^2] ∑_sϵ^s_μϵ^s μ * , ℬ_b_5(E_p⃗, E_p⃗') = 2 m_N (E_p⃗-E_p⃗') [E_p⃗^2+2 (E_p⃗+m_N) E_p⃗' +E_p⃗'^2-2 E_p⃗ m_N-4 m_X^2] ×∑_sϵ^s_μϵ^s μ * , 𝒞_b_5(E_p⃗, E_p⃗') =-8 m_N (E_p⃗-E_p⃗') (-m_N E_p⃗'+E_p⃗ m_N+2 m_X^2)∑_sϵ^s_μϵ^s μ * , where the sum over spin configurations of the product of DM polarisation vectors is given by ∑_sϵ^s_μϵ^s μ * = {E_p⃗^2 m_N^2+m_N E_p⃗'[m_N E_p⃗'-2 (E_p⃗ m_N+m_X^2)]+2 E_p⃗ m_N m_X^2+3 m_X^4} . * For η=(b_6), we obtain 𝒜_(b_6)(E_p⃗, E_p⃗') = 8 m_N^2 m_X^2(E_p⃗-E_p⃗') [E_p⃗'(m_N E_p⃗'+m_N^2-m_X^2)+m_X^2 (E_p⃗-2 m_N). . +E_p⃗ m_N (E_p⃗-m_N)] , ℬ_(b_6)(E_p⃗, E_p⃗') =4 m_N m_X^2(E_p⃗-E_p⃗')^2 [E_p⃗'(2 E_p⃗ m_N-m_N^2+m_X^2). . +E_p⃗ (m_N-m_X) (m_N+m_X)+2 m_N m_X^2] , 𝒞_(b_6)(E_p⃗, E_p⃗') =16 m_N^2m_X^2 (E_p⃗-E_p⃗')^2 (-m_N E_p⃗'+E_p⃗ m_N+2 m_X^2) . * For η=(b_6), the model dependent functions in Eq. (<ref>) can explicitly be written as follows 𝒜_(b_6)(E_p⃗, E_p⃗') =8 m_N^2 (E_p⃗-E_p⃗') {E_p⃗'[2 E_p⃗^2 m_N^2-2 E_p⃗ m_N^2 E_p⃗'+m_N m_X^2 (2 E_p⃗+m_N).. .. -m_X^4]-E_p⃗ m_N^2 m_X^2+m_X^4 (E_p⃗-2 m_N)} , ℬ_(b_6)(E_p⃗, E_p⃗') =2 m_N (E_p⃗-E_p⃗')^2 {E_p⃗^2 m_N^2 (E_p⃗-2 m_N) . . +E_p⃗'[-m_N^2 E_p⃗'(E_p⃗'+E_p⃗+2 m_N)+E_p⃗ m_N^2 (E_p⃗+4 m_N) .. ..+2 m_N m_X^2 (2 E_p⃗+m_N)+2 m_X^4]-2 E_p⃗ m_N^2 m_X^2-2 m_X^4 (E_p⃗-2 m_N)} , 𝒞_(b_6)(E_p⃗, E_p⃗') =-8 m_N^2 (E_p⃗-E_p⃗')^2 [E_p⃗^2 m_N^2+m_N^2 E_p⃗'(E_p⃗'-2 E_p⃗)-4 m_X^4] . * For η=(b_7), we find 𝒜_(b_7)(E_p⃗, E_p⃗') =8 m_N m_X^2(-m_N E_p⃗'+E_p⃗ m_N+2 m_X^2) [E_p⃗'(m_N E_p⃗'+m_N^2-m_X^2) . .+m_X^2 (E_p⃗-2 m_N)+E_p⃗ m_N (E_p⃗-m_N)] , ℬ_(b_7)(E_p⃗, E_p⃗') =4 m_X^2 (E_p⃗-E_p⃗') (-m_N E_p⃗'+E_p⃗ m_N+2 m_X^2) ×[E_p⃗'(2 E_p⃗ m_N-m_N^2+m_X^2) +E_p⃗ (m_N-m_X) (m_N+m_X). .+2 m_N m_X^2] , 𝒞_(b_7)(E_p⃗, E_p⃗') = 16 m_N m_X^2(E_p⃗-E_p⃗') (-m_N E_p⃗'+E_p⃗ m_N+2 m_X^2)^2 . * Finally, for η=(b_7), we obtain 𝒜_(b_7)(E_p⃗, E_p⃗') = 8 m_N^2 m_X^2(E_p⃗-E_p⃗') [E_p⃗^2 m_N+E_p⃗'(m_N E_p⃗'+m_N^2+m_X^2) . .-E_p⃗(m_N^2+m_X^2)] , ℬ_(b_7)(E_p⃗, E_p⃗') = 4 m_N^2m_X^2 (E_p⃗-E_p⃗')^2 [(2 E_p⃗-m_N) E_p⃗'+E_p⃗ m_N-2 m_X^2] , 𝒞_(b_7)(E_p⃗, E_p⃗') = 16 m_N^2 m_X^2(E_p⃗-E_p⃗')^2 (-m_N E_p⃗'+E_p⃗ m_N-m_X^2) . § CROSS SECTIONS FOR RELATIVISTIC DARK MATTER-ELECTRON SCATTERING In the laboratory frame, the differential cross section for DM-electron scattering can be written as follows dσ_eX(E_p⃗, E_k⃗')/ d E_k⃗' = 1/32 π m_e (E_p⃗^2-m_X^2)|ℳ_eX(E_p⃗, E_k⃗')|^2 , where E_k⃗' is the final state electron energy. As for the case of DM-nucleon scattering, we evaluate Eq. (<ref>) by the combined use of FeynRules <cit.>, CalcHEP <cit.> and analytical calculations for validation. Here, we express the squared modulus of the scattering amplitude in Eq. (<ref>) as |ℳ_eX(E_p⃗, E_k⃗')|^2 = η^2 h_3^2 𝒟_η(E_p⃗,E_k⃗')/3 m_X^4 [ 2m_e(E_k⃗'-m_e)+m_A'^2]^2 . Below, we specify the model dependent function 𝒟_η(E_p⃗,E_k⃗') for different choices of coupling constant η: 𝒟_b_5(E_p⃗,E_k⃗') = 8m_e [m_e^2 (E_k⃗'-m_e)^2+2 m_e m_X^2 (E_k⃗'-m_e)+3 m_X^4] ×[2 E_p⃗ m_e (-E_k⃗'+E_p⃗+m_e)+m_X^2 (m_e-E_k⃗')] , 𝒟_(b_6)(E_p⃗,E_k⃗') = 8 m_e^2 m_X^2 (E_k⃗'-m_e) {m_e [E_k⃗'^2-E_k⃗' (2 E_p⃗+3m_e)+2 (E_p⃗^2+E_p⃗m_e+m_e^2)]. .+m_X^2 (E_k⃗'-3 m_e)} , 𝒟_(b_6)(E_p⃗,E_k⃗') = -8 m_e^2 (E_k⃗'-m_e) {-m_e m_X^2 [-E_k⃗' (2 E_p⃗+m_e)+2 E_p⃗^2+2 E_p⃗m_e+m_e^2]. .+2 E_p⃗m_e^2 (m_e-E_k⃗') (-E_k⃗'+E_p⃗+m_e)-m_X^4 (E_k⃗'-3m_e)} , 𝒟_(b_7)(E_p⃗,E_k⃗') = -8 m_e m_X^2 (-E_k⃗'m_e+m_e^2-2 m_X^2) {m_e [E_k⃗'^2-E_k⃗' (2 E_p⃗+3m_e).. 
..+2 (E_p⃗^2+E_p⃗ m_e+m_e^2)]+m_X^2 (E_k⃗'-3m_e)} , 𝒟_(b_7)(E_p⃗,E_k⃗') = 8 m_e^2 m_X^2(E_k⃗'-m_e) {m_e [E_k⃗'^2-E_k⃗' (2 E_p⃗+3 m_e)+2 (E_p⃗^2+E_p⃗ m_e+m_e^2)] . .+m_X^2 (m_e-E_k⃗')} . § SCATTERING CROSS SECTIONS FOR SIMP DM For the DM-electron scattering cross section for the non-abelian SIMP model, we find the expression dσ/ d E_e = 1/192 π m_X^4sin^2(2θ'_X) e^2 ϵ ^2 g_X^2/E_p⃗^2-m_X^2{𝒜(E_e) [ E_p⃗^2 - (E_e - m_e) E_p⃗] - ℬ(E_e)}𝒞(E_e) , where we collected terms depending on different powers of E_p⃗, and introduced the two coefficients 𝒜(E_e) = 2(E_e - m_e)^2 m_e^3 + 10 (E_e - m_e) m_e^2 m_X^2 + 24 m_e m_X^4 , and ℬ(E_e) = (E_e-m_e) m_X^2 [ (E_e-m_e) m_e^3 + (3 E_e -m_e) m_e m_X^2 + 12 m_X^4 ] . The overall factor 𝒞(E_e) arises from the propagators due to Z̃' and X̃_3 exchange, and it is given by 𝒞(E_e)= (1/2 E_e m_e-2 m_e^2+m_Z̃'^2-1/2 E_e m_e-2 m_e^2+m_X̃_3^2)^2 . In the non-relativistic limit, Eq. (<ref>) reduces to Eq. (66) from <cit.> if integrated from 0 to 2 μ^2 v^2/m_e, where μ is the DM-electron reduced mass, while v is the DM-electron relative velocity. For the DM-nucleon scattering cross section for the non-abelian SIMP model, we find the expression dσ/ d E_p⃗' = 1/192 π m_X^4sin^2(2θ'_X) e^2 ϵ ^2 g_X^2/E_p⃗^2-m_X^2[ F_1^2 𝒜_1(E_p⃗, E_p⃗') + F_2^2 𝒜_2(E_p⃗, E_p⃗') + 2 F_1F_2𝒜_12(E_p⃗, E_p⃗')] ×𝒟(E_p⃗,E_p⃗')/4 , where 𝒜_1(E_p⃗, E_p⃗') =4 {2 E_p⃗ m_N^3 E_p⃗'^3+2 E_p⃗'[E_p⃗^3 m_N^3+E_p⃗ m_N^2 m_X^2 (5 E_p⃗+m_N) +m_N m_X^4 (15 E_p⃗+m_N)+6 m_X^6 ] -m_N E_p⃗'^2 [4 E_p⃗^2 m_N^2+m_N m_X^2 (10 E_p⃗+m_N)+3 m_X^4] -E_p⃗ m_X^2 [E_p⃗ m_N^3+m_N m_X^2 (3 E_p⃗+2 m_N)+12 m_X^4]} , 𝒜_2(E_p⃗, E_p⃗') =(E_p⃗-E_p⃗') {E_p⃗^3 m_N^2 (E_p⃗-2 m_N)+2 E_p⃗^2 m_N m_X^2 (2 E_p⃗-5 m_N) +E_p⃗'[6 E_p⃗^2 m_N^3 +E_p⃗'(m_N E_p⃗' (m_N E_p⃗'+2 m_N^2-4 m_X^2)-2 E_p⃗ m_N^2 (E_p⃗+3 m_N) -2 m_N m_X^2 (4 E_p⃗+5 m_N)+10 m_X^4)+4 m_X^4 (7 E_p⃗+9 m_N) +4 E_p⃗ m_N m_X^2 (2 E_p⃗+5 m_N)]+2 E_p⃗ m_X^4 (5 E_p⃗-18 m_N)-48 m_X^6} , 𝒜_12(E_p⃗, E_p⃗') =-2(E_p⃗-E_p⃗') (-m_N E_p⃗'+E_p⃗ m_N+2 m_X^2) {E_p⃗^2 m_N^2+m_N E_p⃗'[m_N E_p⃗' -2 (E_p⃗ m_N+m_X^2)]+2 E_p⃗ m_N m_X^2+12 m_X^4} , and 𝒟(E_p⃗,E_p⃗')= (1/m_Z̃'^2+2 m_N(E_p⃗ -E_p⃗')-1/m_X̃_3+2 m_N(E_p⃗ -E_p⃗'))^2 . JHEP
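To illustrate how the closed-form expressions above can be evaluated numerically, the sketch below (our own transcription; the prefactor grouping sin^2(2θ'_X) e^2 ε^2 g_X^2 / [192π m_X^4 (E_p^2 - m_X^2)] is our reading of the flattened formula, and the benchmark inputs are made up) codes up dσ/dE_e for DM-electron scattering in the non-abelian SIMP model; the difference of the two propagator terms in 𝒞(E_e) makes the Z̃'–X̃_3 cancellation discussed in the main text explicit.

```python
# Sketch (our transcription of the appendix formulas, not code from the
# paper): lab-frame differential DM-electron cross section for the
# non-abelian SIMP model.  Natural units (hbar = c = 1), energies/masses
# in GeV; the overall prefactor grouping is our reading of the flattened text.
import numpy as np

ALPHA_EM = 1.0 / 137.035999
E_CHARGE2 = 4.0 * np.pi * ALPHA_EM           # e^2 in natural units

def dsigma_dEe_simp(E_e, E_p, m_X, m_e, m_Zp, m_X3, g_X, eps, sin2theta):
    """d(sigma)/dE_e for X e- -> X e- in the lab frame (electron at rest),
    with E_p the incoming DM energy and E_e the outgoing electron energy."""
    A = (2 * (E_e - m_e)**2 * m_e**3
         + 10 * (E_e - m_e) * m_e**2 * m_X**2
         + 24 * m_e * m_X**4)
    B = (E_e - m_e) * m_X**2 * ((E_e - m_e) * m_e**3
                                + (3 * E_e - m_e) * m_e * m_X**2
                                + 12 * m_X**4)
    # C: squared difference of the Z'-tilde and X3-tilde propagators --
    # the partial cancellation between the two mediators happens here.
    C = (1.0 / (2 * E_e * m_e - 2 * m_e**2 + m_Zp**2)
         - 1.0 / (2 * E_e * m_e - 2 * m_e**2 + m_X3**2))**2
    pref = (sin2theta**2 * E_CHARGE2 * eps**2 * g_X**2
            / (192 * np.pi * m_X**4 * (E_p**2 - m_X**2)))
    return pref * (A * (E_p**2 - (E_e - m_e) * E_p) - B) * C

# Example call with made-up benchmark numbers (GeV):
val = dsigma_dEe_simp(E_e=0.05, E_p=1.0, m_X=0.1, m_e=511e-6,
                      m_Zp=0.3, m_X3=0.2, g_X=2.5, eps=1e-4, sin2theta=-0.1)
print(val)   # in GeV^-3; multiply by (hbar*c)^2 ~ 0.389 mb GeV^2 for mb/GeV
```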
http://arxiv.org/abs/2307.00523v1
20230702091432
Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage
[ "Torsten Hoefler", "Thomas Haener", "Matthias Troyer" ]
quant-ph
[ "quant-ph", "cs.DS", "cs.PF", "physics.pop-ph" ]
Torsten Hoefler ([email protected]), Thomas Haener ([email protected]; this work was done prior to T.H. joining AWS), and Matthias Troyer ([email protected])
Microsoft Corporation, One Microsoft Way, Redmond, Washington, USA 98052
ETH Zurich, Universitaetstrasse 6, Zurich, Switzerland 8092
Quantum computers offer a new paradigm of computing with the potential to vastly outperform any imaginable classical computer. This has caused a gold rush towards new quantum algorithms and hardware. In light of the growing expectations and hype surrounding quantum computing we ask which applications are promising candidates to realize quantum advantage. We argue that small data problems and quantum algorithms with super-quadratic speedups are essential to make quantum computers useful in practice. With these guidelines one can separate promising applications for quantum computing from those where classical solutions should be pursued. While most of the proposed quantum algorithms and applications do not achieve the necessary speedups to be considered practical, we already see a huge potential in materials science and chemistry. We expect further applications to be developed based on our guidelines.
Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage
Torsten Hoefler, Thomas Haener, and Matthias Troyer
==================================================================================
Operating on fundamentally different principles than conventional computers, quantum computers promise to solve a variety of important problems that seemed forever intractable on classical computers. Leveraging the quantum foundations of nature, the time to solve certain problems on quantum computers grows more slowly with the size of the problem than on classical computers—this is called quantum speedup. Going beyond quantum supremacy <cit.>, which was the demonstration of a quantum computer outperforming a classical one for an artificial problem, an important question is finding meaningful applications (of academic or commercial interest) that can realistically be solved faster on a quantum computer than on a classical one. We call this a practical quantum advantage, or quantum practicality for short. There is a maze of hard problems that have been suggested to profit from quantum acceleration: from cryptanalysis, chemistry and materials science, to optimization, big data, machine learning, database search, drug design and protein folding, fluid dynamics and weather prediction. But which of these applications realistically offer a potential quantum advantage in practice? For this, we cannot rely only on asymptotic speedups but must consider the constants involved. Being optimistic in our outlook for quantum computers, we will identify clear guidelines for quantum practicality and use them to classify which of the many proposed applications for quantum computing show promise and which ones would require significant algorithmic improvements to become practically relevant. To establish reliable guidelines, or lower bounds for the required speedup of a quantum computer, we err on the side of being optimistic for quantum and overly pessimistic for classical computing. Despite our overly-optimistic assumptions, our analysis will show that a wide range of often-cited applications is unlikely to result in a practical quantum advantage without significant algorithmic improvements.
We compare the performance of only a single classical chip that is fabricated today similar to the one used in the NVIDIA A100 GPU which fits around 54 billion transistors <cit.> with an optimistic assumption for a hypothetical quantum computer that may be available in the next decades with 10,000 error-corrected logical qubits, 10 μ s gate time for logical operations, the ability to simultaneously perform gate operations on all qubits and all-to-all connectivity for fault tolerant two-qubit gates.[Note that no quantum error correction scheme exists today that allows simultaneous execution of gates and all-to-all connectivity without at least a O(√(N)) slowdown for N qubits.] I/O bandwidth We first consider the fundamental I/O bottleneck that limits quantum computers in their interaction with the classical world, which determines bounds for data input and output bandwidths. Scalable implementations of quantum random access memory (QRAM <cit.>) demand a fault-tolerant error corrected implementation and the bandwidth is then fundamentally limited by the number of quantum gate operations or measurements that can be performed per unit time. We assume only a single gate operation per input bit. For our optimistic future quantum computer the resulting rate is 10,000 times smaller than for an existing classical chip (see Table 1). We immediately see that any problem that is limited by accessing classical data, such as search problems in databases, will be solved faster by classical computers. Similarly, a potentially exponential quantum speedup in linear algebra problems <cit.>, vanishes when the matrix has to be loaded from classical data, or when the full solution vector should be read out. More generally, quantum computers will be practical for “big compute” problems on small data, not big data problems. Crossover scale With quantum speedup, asymptotically fewer operations will be needed on a quantum computer than on a classical computer. Due to the high operational complexity and slower gate operations, however, each operation on a quantum computer will be slower than a corresponding classical one. As sketched in Figure 1, classical computers will thus always be faster for small problems and quantum advantage is realized beyond a problem-dependent crossover scale where the gain due to quantum speedup overcomes the constant slowdown of the quantum computer. To have real practical impact, the crossover time needs to be short, not more than weeks. Constants matter in determining the utility for applications, as with any runtime estimate in computing. Compute performance To model performance, we employ the well-known work-depth model from classical parallel computing to determine upper bounds of classical silicon-based computations and an extension for quantum computations. In this model, the work is the total number of operations and applies to both classical and quantum executions. In Table 1 we provide concrete examples using three types of operations: logical operations, 16-bit floating point, and 32-bit integer or fixed-point arithmetic operations for numerical modeling. For the quantum costs, we consider only the most expensive parts in our estimates, again benefiting quantum computers: For arithmetic, we count just the dominant cost of multiplications, assuming that additions are free. Furthermore, for floating point multiplication, we consider only the cost of the multiplication of the mantissa (10 bits in fp16). 
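The crossover argument above can be made concrete with a few lines of Python. This is a minimal illustrative sketch, not code from the paper; the two per-operation times are round-number assumptions of the magnitude discussed in the text (a classical chip performing on the order of 10^14 multiplications per second versus a hypothetical quantum computer performing on the order of 10^4), and the function names are our own.

```python
# Sketch of the crossover-scale model: classical runtime N**k * M * t_c versus
# quantum runtime N * M * t_q, where N is the number of oracle calls and M the
# number of operations per oracle call. Values below are illustrative assumptions.

def crossover(k, t_c, t_q, M=1):
    """Smallest oracle-call count N at which a degree-k speedup pays off,
    and the corresponding quantum runtime in seconds."""
    n_star = (t_q / t_c) ** (1.0 / (k - 1))   # from N**(k-1) >= t_q / t_c
    return n_star, n_star * M * t_q

t_c = 1e-14   # assumed seconds per classical fp16 operation
t_q = 1e-4    # assumed seconds per quantum fp16-sized operation

for k in (2, 3, 4):
    n_star, t_cross = crossover(k, t_c, t_q)
    m_max = 1e6 / (t_q * n_star)              # ops per oracle call within ~2 weeks
    print(f"x^{k} speedup: crossover at N ~ {n_star:.1e} oracle calls "
          f"({t_cross / 86400:.2f} days at M=1); "
          f"at most {m_max:.0f} ops per oracle call fit a two-week budget")
```

With these rounded assumptions the quadratic case allows only about one floating point operation per oracle call before the two-week budget is exhausted, while the cubic and quartic cases allow thousands to millions, in line with the discussion that follows.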
We ignore all further overheads incurred by the quantum algorithm due to reversible computations, as well as the significant cost of mapping to a specific hardware architecture with limited qubit connectivity. Crossover times for classical and quantum computation To estimate lower bounds for the crossover times, we next consider that while both classical and quantum computers have to evaluate the same functions (usually called oracles) that describe a problem, quantum computers require fewer evaluations thereof due to quantum speedup. At the root of many quantum acceleration proposals lies a quadratic quantum speedup, including the well-known Grover algorithm <cit.>. For such an algorithm, a problem that needs X function calls on a quantum computer requires quadratically more, namely on the order of X^2 calls, on a classical computer. To overcome the large constant performance difference between a quantum computer and a classical computer, which Table 1 shows to be more than a factor of 10^10, a large number of function calls X≫ 10^10 is needed for the quantum speedup to deliver a practical advantage. In Table 2, we estimate upper bounds for the complexity of the function that will lead to a crossover time of 10^6 seconds, or roughly two weeks. We see that with quadratic speedup even a single floating point or integer operation leads to crossover times of several months. Furthermore, at most 68 binary logical operations can be afforded to stay within our desired crossover time of two weeks, which is too low for any non-trivial application. Keeping in mind that these estimates are pessimistic for classical computation (a single one of today's classical chips) and overly optimistic for quantum computing (only considering the multiplication of the mantissa and assuming all-to-all qubit connectivity), we come to the clear conclusion that quadratic speedups are insufficient for practical quantum advantage. The numbers look better for cubic or quartic speedups, where thousands or millions of operations may be feasible, and we hence conclude, similarly to Babbush et al. <cit.>, that at least cubic or quartic speedups are required for a practical quantum advantage. As a result of our overly-optimistic assumptions in favor of quantum computing, these conclusions will remain valid even with significant advances in quantum technology of multiple orders of magnitude. Practical and impractical applications We can now use the above considerations to discuss several classes of applications where our fundamental bounds draw a line for quantum practicality. The most likely problems to allow for a practical quantum advantage are those with exponential quantum speedup. This includes the simulation of quantum systems for problems in chemistry, materials science, and quantum physics, as well as cryptanalysis using Shor's algorithm <cit.>. The solution of linear systems of equations for highly structured problems <cit.> also has an exponential speedup, but the I/O limitations discussed above will limit the practicality and undo this advantage if the matrix has to be loaded from memory instead of being computed based on limited data, or if knowledge of the full solution is required (as opposed to just some limited information obtained by sampling the solution). Equally importantly, we identify likely dead ends in the maze of applications.
A large range of problem areas with quadratic quantum speedups, such as many current machine learning training approaches, accelerating drug design and protein folding with Grover's algorithm, speeding up Monte Carlo simulations through quantum walks, as well as more traditional scientific computing simulations including the solution of many non-linear systems of equations, such as fluid dynamics in the turbulent regime, weather, and climate simulations will not achieve quantum advantage with current quantum algorithms in the foreseeable future. We also conclude that the identified I/O limits constrain the performance of quantum computing for big data problems, unstructured linear systems, and database search based on Grover’s algorithm such that a speedup is unlikely in those cases. Furthermore, Aaronson et al. <cit.> show that the achievable quantum speedup of unstructured black-box algorithms is limited to 𝒪(N^4). This implies that any algorithm achieving higher speedup must exploit structure in the problem it solves. These considerations help with separating hype from practicality in the search for quantum applications and can guide algorithmic developments. Specifically, our analysis shows that 1) it is necessary for the community to focus on super-quadratic speedups, ideally exponential speedups and 2) one needs to carefully consider I/O bottlenecks when deriving algorithms to exploit quantum computation best. Therefore, the most promising candidates for quantum practicality are small-data problems with exponential speedup. Specific examples where this is the case are quantum problems in chemistry and materials science <cit.>, which we identify as the most promising application. We recommend to use precise requirements models <cit.> to get more reliable and realistic (less optimistic) estimates in cases where our rough guidelines indicate a potential practical quantum advantage. § METHODS Here we provide more details for how we obtained the numbers above. We compare our quantum computer with a single microprocessor chip similar to the one used in the NVIDIA A100 GPU <cit.>. The A100 chip is around 850 mm^2 in size and manufactured in TSMC's 7nm N7 silicon process. A100 shows that such a chip fits around 54.2 billion transistors and can operator at a cycle time of around 0.7ns. §.§ Determining peak operation throughputs In Table 1, we provide concrete examples using three types of operations: logical operations, 16-bit floating point, and 32-bit integer arithmetic operations for numerical modeling. Other datatypes could be modeled using our methodology as well. Classical NVIDIA A100 According to its datasheet, NVIDIA’s A100 GPU, a SIMT-style von Neumann load store architecture, delivers 312 tera-operations per second (Top/s) with half precision floating point (fp16) through tensor cores and 78 Top/s through the normal processing pipeline. NVIDIA assumes a 50/50 mix of addition and multiplication operations and thus, we divide the number by two, yielding 195 Top/s fp16 performance. The datasheet states 19.5 Top/s for 32-bit integer operations, again assuming a 50/50 mix of addition and multiplication, leading to an effective 9.75 Top/s. The binary tensor core performance is listed as 4,992 Top/s with a limited set of instructions. Classical Special Purpose ASIC Our main analysis assumes that we build a special-purpose ASIC using a similar technology. 
If we were to fill the equivalent chip-space of an A100 with a specialized circuit, we would use existing execution units, for which the size is typically measured in gate equivalents (GE). A 16-bit floating point unit (FPU) with addition and multiplication functions requires approximately 7 kGE, a 32-bit integer unit requires 18 kGE <cit.>, and we assume 50 GE for a simple binary operation. All units include operand buffer registers and support a set of programmable instructions. We note that simple addition or multiplication circuits would be significantly cheaper. If we assume a transistor-to-gate ratio of 10 <cit.> and that 50% of the total chip area is used for control logic of a dataflow ASIC with the required buffering, we can fit 54.2B/(7k· 10· 2)=387k fp16 units. Similarly, we can fit 54.2B/(18k· 10· 2)=151k int32, or 54.2B/(50· 10· 2)=54.2M bin2 units on our hypothetical chip. Assuming a cycle time of 0.7ns, this leads to a total operation rate of 0.55 fp16, 0.22 int32, and 77.4 bin Pop/s for an application-specific ASIC with the A100's technology and budget. The ASIC thus leads to a raw speedup between roughly 2 and 15x over a programmable circuit. Thus, on classical silicon, the performance ranges roughly between 10^13 and 10^16 op/s for binary, int32, and fp16 types. Hypothetical future quantum computer To determine the costs of N-bit multiplication on a quantum computer, we choose the controlled adder from Gidney <cit.> and implement the multiplication using N single-bit controlled adders, each requiring 2N CCZ magic states. These states are produced in so called “magic state factories” that are implemented on the physical chip. While the resulting multiplier is entirely sequential, we found that this construction allows for more units to be placed on one chip than for a low-depth adder and/or for a tree-like reduction of partial products since (1) the number of CCZ states is lower (and thus fewer magic state factories are required) and (2) the number of work-qubits is lower. The resulting multiplier has a CCZ-depth and count of 2N^2 using 5N-1 qubits (2N input, 2N-1 output, N ancilla for the addition). To compute the space overhead due to CCZ factories, we first use the analysis of Gidney and Fowler <cit.> to compute the number of physical qubits per factory when aiming for circuits (programs) using ≈10^8 CCZ magic states with physical gate errors of 10^-3. We approximate the overhead in terms of logical qubits by dividing the physical space overhead by 2d^2, where we choose the error-correcting code distance d=31 to be the same as the distance used for the second level of distillation <cit.>. Thus we divide Gidney and Fowler's 147,904 physical qubits per factory (for details consult the ancillary spreadsheet (field B40) of Gidney and Fowler) by 2d^2=2· 31^2 and get an equivalent space of 77 logical qubits per factory. For the multiplier of the 10-bit mantissa of an fp16 floating point number, we need 2· 10^2=200 CCZ states and 5· 10=50 qubits. Since each factory takes 5.5 cycles <cit.> and we can pipeline the production of CCZ states, we assume 5.5 factories per multiplication unit such that multipliers don't wait for magic state production on average. Thus, each multipler requires 200 cycles and 5N+5.5· 77=50+5.5· 77=473.5 qubits. With a total of 10,000 logical qubits, we can implement 21 10-bit multipliers on our hypothetical quantum chip. With 10μ s cycle time, the 200 cycle latency, we get the final rate of less than 10^5 cycle/s / (200 cycle/op) · 21 = 10.5k op/s. 
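The unit counts and rates just quoted follow from simple arithmetic, which the short sketch below reproduces. It is illustrative only: the constants are the ones stated in the text (gate equivalents per unit, transistor-to-gate ratio, factory size, cycle times), and the helper names are our own rather than part of any released artifact.

```python
# Reproduces the classical ASIC unit counts and the quantum multiplier rate
# discussed above, using the constants quoted in the text.

TRANSISTORS = 54.2e9        # A100-class transistor budget
CYCLE_S = 0.7e-9            # classical cycle time
GE = {"fp16": 7_000, "int32": 18_000, "bin": 50}   # gate equivalents per unit

for name, ge in GE.items():
    units = TRANSISTORS / (ge * 10 * 2)   # 10 transistors/GE, 50% control overhead
    print(f"classical {name}: {units:,.0f} units -> {units / CYCLE_S:.2e} op/s")

def quantum_mult_rate(n_bits, logical_qubits=10_000, cycle_s=10e-6,
                      factory_qubits=77, factories_per_unit=5.5):
    cycles_per_mult = 2 * n_bits ** 2                      # CCZ depth of the multiplier
    qubits_per_unit = 5 * n_bits + factories_per_unit * factory_qubits
    units = int(logical_qubits // qubits_per_unit)
    return units, units * (1.0 / cycle_s) / cycles_per_mult

units, rate = quantum_mult_rate(10)   # 10-bit fp16 mantissa
print(f"quantum fp16 mantissa: {units} multipliers -> {rate:,.0f} op/s")
```

Running it yields roughly 387k fp16, 151k int32, and 54.2M binary units on the classical side, and 21 multipliers at about 10.5k op/s on the quantum side, matching the figures above.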
For int32 (N=32), the calculation is equivalent. For binary, we assume two input and one output qubit for the (binary) adder (Toffoli gate), which does not need ancillas. The final results are summarized in Table 1. §.§ A note on parallelism We assumed massively parallel execution of the oracle on both the classical and quantum computer (i.e., oracles with a depth of one). If the oracle does not admit such parallelization, e.g., if depth = work in the worst case scenario, then the comparison becomes more favorable towards the quantum computer. One could model this scenario by allowing the classical computer to only perform one operation per cycle. With a 2 GHz clock frequency, this would mean a slowdown of about 100,000 times for fp16 on the GPU. In this extremely unrealistic algorithmic worst case, the oracle would still have to consist of only several thousands of fp16 operations with a quadratic speedup. However, we note that in practice, most oracles have low depth and parallelization across a single chip is achievable, which is what we assumed in the main text. §.§ Determining maximum operation counts per oracle call In Table 2, we list the maximum number of operations of a certain type that can be run to achieve a quantum speedup within a runtime of 10^6 seconds (a little more than two weeks). The maximum number of classical operations that can be performed with a single classical chip in 10^6 seconds would be: 0.55 fp16, 0.22 int32, and 77.4 bin Zop. Similarly, assuming the rates from Table 1, for a quantum chip: 7, 4, 2,350 Gop, respectively. We now assume that all calculations are used in oracle calls on the quantum computer and we ignore all further costs on the quantum machine. We start by modeling algorithms that provide polynomial X^k speedup, for small constants k. For example, for Grover's algorithm <cit.>, k=2. It is clear that quantum computers are asymptotically faster (in the number of oracle queries) for any k>1. However, we are interested in finding the oracle complexity (i.e., the number of operations required to evaluate it) for which a quantum computer is faster than a classical computer within the time window of 10^6 seconds. Let the number of operations required to evaluate a single oracle call be M and let the number of required invocations be N. It takes a classical computer time T_c = N^k· M· t_c, whereas a quantum computer solves the same problem in time T_q = N· M· t_q, where t_c and t_q denote the time to evaluate an operation on a classical and on a quantum computer, respectively. By demanding that the quantum computer should solve the problem faster than the classical computer and within 10^6 seconds, we find (t_q/t_c)^{1/(k-1)}≤ N≤ 10^6/(t_q· M), which allows us to compute the maximal number of basic operations per oracle evaluation such that the quantum computer still achieves a practical speedup: M≤ 10^6· (t_c/t_q^k)^{1/(k-1)}. §.§ Determining I/O bandwidth We use the I/O bandwidth specified in NVIDIA's A100 datasheet for our classical chips. For the quantum computer, we assume that one quantum gate is required per bit of I/O. Using all 10,000 qubits for reading/writing, this yields an estimate of the I/O bandwidth B≈ 10,000/10^-5 = 1 Gbit/s. §.§ Acknowledgments We thank Luca Benini for helpful discussions about ASIC and processor design and related overheads and Wim van Dam and all anonymous reviewers for comments that improved an earlier draft of this work.
http://arxiv.org/abs/2307.02830v1
20230706075346
Generative Zero-Shot Prompt Learning for Cross-Domain Slot Filling with Inverse Prompting
[ "Xuefeng Li", "Liwen Wang", "Guanting Dong", "Keqing He", "Jinzheng Zhao", "Hao Lei", "Jiachi Liu", "Weiran Xu" ]
cs.CL
[ "cs.CL" ]
Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the unlabeled target domain. Existing models either encode slot descriptions and examples or design handcrafted question templates using heuristic rules, suffering from poor generalization capability or robustness. In this paper, we propose a generative zero-shot prompt learning framework for cross-domain slot filling, improving both generalization and robustness over previous work. Besides, we introduce a novel inverse prompting strategy to distinguish different slot types to avoid the multiple prediction problem, and an efficient prompt tuning strategy to boost performance while training only a small number of prompt parameters. Experiments and analysis demonstrate the effectiveness of our proposed framework, especially huge improvements (+13.44% F1) on the unseen slots.[Our source code is available at: <https://github.com/LiXuefeng2020ai/GZPL>] § INTRODUCTION Slot filling in a task-oriented dialogue system aims to extract task-related information like hotel_name, hotel_address from user queries, and is widely applied in existing intelligent conversation applications <cit.>. Traditional supervised methods <cit.> have shown remarkable performance, but they still rely on large-scale labeled data. Lack of generalization to new domains hinders their further application to practical industrial scenarios. In this work, we focus on zero-shot cross-domain slot filling, which transfers knowledge from the source domain D_S to the target domain D_T without requiring any labeled training data of D_T. Conventional approaches <cit.> formulate slot filling as a sequence labeling task and use meta-information such as slot descriptions and slot examples to capture the semantic relationship between slot types and input tokens. However, these models only learn a surface mapping of the slot types between D_S and D_T and get poor performance on unseen slots in the target domain <cit.>. Further, <cit.> propose a machine reading comprehension (MRC) framework for slot filling to enhance the semantic interaction between slot types and slot values. They first construct many well-designed question templates based on slot schema or slot examples, then train an MRC model <cit.> to predict corresponding slot values for a given slot type question. But they rely on handcrafted question templates using heuristic rules and pre-defined ontologies, which suffers from poor model robustness. Besides, employing additional pre-training on large-scale external MRC datasets is also time-consuming and prohibitively expensive.
To solve the above issues, in this paper, we propose a Generative Zero-shot Prompt Learning (GZPL) framework for cross-domain slot filling. Instead of transforming the slot filling task into sequence labeling or MRC, we formulate it as a language generation task (see Fig <ref>). Specifically, we concat the question of each slot type, names of all slot types, the input query together to construct the input sequence and take the related slot values as output sequence. The converted text-to-text format has two benefits for zero-shot slot filling: (1) Compared to sequence labeling, our formulation enriches deep semantic interaction between slot types and slot values via pre-trained language models <cit.>, which helps recognize unseen slots only existing in the target domain. We find it significantly improves unseen slot F1 by 13.44% compared to the previous state-of-the-art (SOTA) model (see Section <ref>). The result proves the strong generalization capability to new domains of our proposed framework. (2) Compared to MRC, our framework reduces the complexity of creating well-designed question templates and is more robust to different templates (see Section <ref>). Besides, we concat the names of all slot types into the input sequence to construct direct connections between different slot types, while MRC makes independent predictions for each slot type. Along with our proposed framework, we present an inverse prompting strategy to distinguish different slot types for a given entity to avoid the multiple prediction problem <cit.> where the model possibly predicts multiple slot types for one entity span. Different from the above formulation, we take each slot value as input and corresponding slot type as output to build a mapping from entity tokens to entity types. In this way, we force the model to learn explicit distinctions of different types. Inspired by recent parameter-efficient tuning work <cit.>, we also introduce an efficient prompt tuning strategy to boost higher performance by training fewer prompt parameters instead of the whole PLM. Our contributions are three-fold: (1) We propose a simple but strong generative zero-shot prompt learning framework for cross-domain slot filling, which has better generalization capability and robustness than previous work. (2) We present a novel inverse prompting strategy to distinguish different slot types to avoid the multiple prediction problem. Besides, we introduce an efficient prompt tuning strategy to boost higher performance only training fewer prompt parameters. (3) Experiments and analysis demonstrate the effectiveness of our proposed framework, especially for good generalization to unseen slots (F1 +13.44%↑), strong robustness to different templates (Δ F1 +10.23%↑), parameter efficiency (10x fewer parameters). § METHODOLOGY Our model is shown in Fig <ref>. In our framework, we first construct several simple template sentences for the model input, where each sentence includes a slot type question, all slot types and the original query. Then we use a PLM to generate the corresponding slot values. Along with the main task formulation, we perform an inverse-prompting task to warm up the parameters to strengthen the relationship between entities and slot types. §.§ Problem Definition Given a user input sentence containing n words X_input = {x_1,x_2,...,x_n} and slot type sets S = {s_1,s_2,...,s_m}, the slot filling task aims to find all the entities in X_input. 
For zero-shot setting in our paper, we train models using labeled data from the source domain and make predictions in the target domain. §.§ Generative Zero-shot Prompt Learning Framework We customize the entire task using a generative zero-shot prompt learning framework. Specifically, we concat the question of each slot type, names of all slot types, the input query together to construct the input sequence and take the related slot values as output sequence. We formulate it as follows: what is the slot_type ? {all slot types} x_1 x_2 ... x_n where slot_type represents the queried slot type, {all slot types} represents all slot types across all domains. For slot types that do not exist in the input, we set the answer to special token "none". For each original input query, we construct QA pairs as the same number of slot types[Appendix <ref> shows more details about input and output formats. Appendix <ref> gives the analysis of the inverse-prompting task.]. Label Prompt Construction We do not focus on the question template construction as the previous works <cit.>. Instead, we simply set up the simplest question form of “what is the ?" to highlight the simplicity and effectiveness of our proposed framework. It is worth noting that we also include slot names from all domains in the prompt. The main purpose of this setting is to enhance the interaction between different slot types, so that the model can find the best answer from the original text. Inverse Prompting Previous MRC works suffer from the multiple prediction problem <cit.> where the model possibly predicts multiple slot types for one entity span. To solve such conflict, we design an invert prompting task to warm up the model parameters first. We inverse the original QA pair, that is, set the question to the entities and the answer to the corresponding slot types. This task enables the model to distinguish different slot types for slot entities. In this way, deep semantic relationships between slot types are learned, and the model will learn stronger entity-slot relations. We both train the main task and the inverse task in the same auto-regressive way. Experiments show that first using the inverse task for pre-training then the main task gets the best performance. In addition, since the result of the main task could be "none", we additionally use a negative sampling strategy here to ensure the consistency of the two tasks. We just randomly sample different spans in sentences, and set the corresponding answers to "none". This strategy can also improve the anti-noise ability of the model and improve the robustness of the framework. In our experiments, we set the ratio of positive and negative samples to 1:1. Training and Inference During training, we try two different training strategies: fine-tuning and prefix-tuning <cit.>. In the fine-tuning mode, we first use the inverse task to warm up the model parameters, and then perform the main task. All the PLM parameters are finetuned. For prefix-tuning, the parameters of the pre-trained model are fixed during training, and only the parameters of the new added prefix embeddings are trained. Specifically, we add a trainable prefix embedding matrix in each attention layer of the PLM [Please see more details in the original prefix-tuning work <cit.>.]. This method requires 10x fewer trainable parameters and is more parameter-efficient. During the inference, we only perform the main task. We query for all slot types, and the model directly generates the corresponding slot entities. 
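To make the data format concrete, the sketch below constructs main-task and inverse-task pairs along the lines described above. It is a hedged illustration rather than the released GZPL code: the main-task template and the "none" convention follow the text, but the exact wording of the inverse question is not specified here, so the inverse template and all function names are placeholders of our own.

```python
# Sketch of main-task and inverse-task pair construction for the GZPL setup.
import random

def main_task_pairs(query_tokens, slot_values, all_slot_types):
    """One (input, target) pair per slot type; slots absent from the query map to 'none'."""
    query = " ".join(query_tokens)
    types = " ".join(all_slot_types)
    pairs = []
    for slot in all_slot_types:
        source = f"what is the {slot} ? {types} {query}"
        target = ", ".join(slot_values.get(slot, [])) or "none"
        pairs.append((source, target))
    return pairs

def inverse_task_pairs(query_tokens, slot_values, neg_ratio=1.0, max_span=3):
    """Inverse prompting: entity span -> slot type, plus random negative spans -> 'none'.
    The question wording below is an assumed placeholder, not the paper's template."""
    query = " ".join(query_tokens)
    pairs = [(f"what does {value} refer to ? {query}", slot)
             for slot, values in slot_values.items() for value in values]
    for _ in range(int(neg_ratio * len(pairs))):       # 1:1 positive/negative ratio
        start = random.randrange(len(query_tokens))
        span = " ".join(query_tokens[start:start + random.randint(1, max_span)])
        pairs.append((f"what does {span} refer to ? {query}", "none"))
    return pairs
```

A sampled negative span may occasionally coincide with a true entity; a production implementation would filter such collisions, which this simplified sketch omits.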
Compared with the previous method <cit.>, our model will not need additional span matching mechanism, so it will be more concise and intuitive. To ensure task consistency with MRC-based models, we add a post-processing step: if multiple slot types predict the same entity span, we choose the answer with the highest generation probability of the first word. § SETTINGS §.§ Datasets SNIPS <cit.> is a public spoken language understanding dataset consisting of crowdsourced user utterances with 39 slots across 7 domains. It has around 2000 training instances per domain. To simulate the cross-domain scenarios, we follow <cit.> to split the dataset, which selects one domain as the target domain and the other six domains as the source domains each time. §.§ Baselines Sequence Tagging Models: Concept Tagger (CT) proposed by <cit.>, which utilizes slot descriptions to boost the performance on detecting unseen slots. Robust Zero-shot Tagger (RZT) proposed by <cit.>, which is based on CT and leverages both slot descriptions and examples to improve the robustness of zero-shot slot filling. Coarse-to-fine Approach (Coach) proposed by <cit.>, which contains coarse-grained BIO 3-way classification and a fine-grained slot type prediction. In this model, slot descriptions are used in the second stage to help recognize unseen slots, and template regularization is applied to further improve the slot filling performance of similar or the same slot types. Contrastive Zero-Shot Learning with Adversarial Attack (CZSL-Adv) proposed by <cit.>, which is based on Coach and utilizes contrastive learning and adversarial attacks to improve the performance and robustness of the framework. Prototypical Contrastive Learning and Label Confusion (PCLC) <cit.>, which proposes a method to dynamically refine slot prototypes’ representations based on Coach framework and obtains an improved performance. MRC-based Models: QA-driven Slot Filling Framework (QASF). Contrary to previous methods, <cit.> introduced MRC-based framework and leveraged the PLMs to solve the problem. Reading Comprehension for Slot Filling (RCSF) <cit.>, which takes a new perspective on cross-domain slot filling by formulating it as a machine reading comprehension (MRC) problem, which transforms slot names into well-designed queries to improve the detection performance of domain-specific slots. §.§ Implementation Details We use T5-base[T5 is a transformer-based pre-training language model, whose pre-training tasks include text-to-text formulation. We select it as our pre-training model for the consistency between the pre-training tasks and the downstream slot-QA tasks.] as the backbone in our experiments. Model parameters are optimized using the AdamW optimizer <cit.> with a learning rate 5e-05. We set the batch size to 8 and use early stop with a patience 10 to ensure the stability of the model. The prefix length is set to 5 and the dropout rate is set to 0.1. Since RCSF uses the BERT-Large[https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2 ] model, we use T5-large[https://huggingface.co/t5-large] model to match the number of parameters of the model used in RCSF. The number of parameters of T5-base[https://huggingface.co/t5-base], T5-large and prefix parameters are 2.2 billion, 7.7 billion, and 20 million, respectively. For all experiments, we train and test our model on 3090 GPU and use f1-score as the evaluation metric. 
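For readers who want to reproduce the prefix-tuning variant, a minimal training setup consistent with the hyperparameters listed above (T5-base, AdamW with learning rate 5e-5, prefix length 5) could look as follows. This sketch assumes the Hugging Face transformers and peft libraries; the authors' actual training code may be organized differently.

```python
# Hedged sketch of a prefix-tuning setup for the generative slot-filling task.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from peft import PrefixTuningConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("t5-base")
backbone = T5ForConditionalGeneration.from_pretrained("t5-base")

peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM,
                                 num_virtual_tokens=5)   # prefix length 5
model = get_peft_model(backbone, peft_config)            # backbone weights stay frozen
model.print_trainable_parameters()

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-5)

def training_step(sources, targets):
    batch = tokenizer(sources, padding=True, truncation=True, return_tensors="pt")
    labels = tokenizer(targets, padding=True, truncation=True,
                       return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100      # ignore padding in the loss
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The same loop with all backbone parameters unfrozen corresponds to the fine-tuning baseline compared in the experiments.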
During the training process, we only do prefix-tuning on T5-base, we fix the parameters of T5-base and only fine-tune the parameters of prefix embeddings. We take the average F1 scores of three experiments as our final result. § EXPERIMENTS §.§ Main Results Results show that our proposed framework GZPL significantly outperforms SOTAs. Our base model GZPL(pt) outperforms PCLC by 15.00% and QASF by 13.37% respectively. We don't directly compare our model with RCSF because it uses two unfair settings: using BERT-large as backbone and pre-training it on the QA dataset SQuAD2.0 <cit.>. Nevertheless, our base model still outperforms RCSF by 2.06%. We adopt another setting to compare with RCSF, that is, change the backbone model to T5-large to ensure that the model size is consistent. We can see GZPL*(pt) with T5-large outperforms RCSF by 6.31%. Besides, we also find using prefix-tuning is better than traditional fine-tuning, which proves prefix-tuning has better knowledge transferability.[GZPL without special annotations represent using prefix-tuning unless otherwise noted in the following section.] §.§ Analysis Generalization Analysis Following <cit.>, if a slot does not exist in the remaining six source domains, it will be categorized into the “unseen slot" part, otherwise “seen slot". The results are shown in Table <ref>. We can see that our method outperforms previous methods by a large margin on unseen slots, while performs slightly worse than RCSF on seen slots. Our model focuses more on the generalizable knowledge transfer rather than overfitting on the seen slots in source domains, so it has stronger generalization ability than the previous methods. Robustness Analysis To verify the robustness of our framework, we change the original template "what is the ?" as RCSF. We still use the complete template during training, but delete some tokens of the template during testing, and the results are shown in Table <ref>. Our model drops slightly by average 4.2% when the template changes, while RCSF drops significantly by 15.6%. This demonstrates that our model is more robust to different input templates. Effectiveness Analysis To further explore the effectiveness of the GZPL under low resource scenarios, we conduct several low-resource settings on source domains, which means only 20, 50, 100, 200 and 500 samples in source domain are used during training stage. As SOTA model (RCSF) does not show results of few-shot experiments, we evaluate RCSF using its open source code. As shown in Table <ref>, the per formance of our model is much better than that of RCSF under low resource conditions. Besides, with only 100 samples (5%), our model maintains 63.13% performance compared to the results using complete source domain data. While using 500 samples (25%), 82.08% performance can be maintained. This demonstrates our approach is more data-efficient than other slot filling models. Ablation Studies To better prove the effectiveness of the label prompt strategy and the inverse-prompt task, we conduct ablation experiments on these two components. Table <ref> illustrates the results of ablation, where “w/o" denotes the model performance without specific module. As we can see, the model will have a slight performance drop (-2.35%) if the slot types in template are removed and the performance of the model will degrade significantly (-3.5%) without the inverse-prompt task. 
Besides, it is observed that when removing both the label-prompt and inverse-prompt jointly, the performance of the model will drop drastically (-4.69%). This suggests that both of them play an important role in improving the performance. § CONCLUSION In this paper, we introduce a generative prompt learning framework for zero-shot cross-domain slot filling. Based on this, we introduce the label prompt strategy and the inverse prompting to improve the generalization capability and robustness of the framework. Another prefix-tuning mechanism is performed to boost model training efficiency. The exhaustive experimental results show the effectiveness of our methods, and the qualitative analysis inspire new insight into related area. Generally, our framework can be applied to more complex situations, such as nested NER, discontinuous/multiple slots, which we leave to future work. Another interesting direction is to improve the inference efficiency, like concat all the slot questions together and get final results. § ACKNOWLEDGEMENTS This work was partially supported by National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd, MoE-CMCC "Artifical Intelligence" Project No. MCM20190701. § DETAILS ABOUT THE INPUT AND OUTPUT FORMATS Table <ref> shows an example of how to perform slot filling tasks for a user query under our settings. As shown in the table, since we already know the slot type information for the domain the data belongs to, we will customize the unique questions for each slot type according to our template and the model then generate the answers for each question. The answer can be one or more spans in the original sentence, or be the special token "none". It is worth noting that when a slot type corresponds to multiple slot entities, the answer will be separated by commas. However, this situation hardly exists in the Snips dataset, so it is rare to have multiple spans as answers when testing. § ANALYSIS OF THE INVERSE-PROMPTING TASK To further explore whether our auxiliary task alleviates the problem of repeated generation, we verify its effect through the following two metrics: precision and recall score. We use these metrics based on our recognition that repeated generation will result in more entities being predicted. On the one hand, this will improve the recall score, and on the other hand, it will hurt the accuracy of the model prediction. The experimental results are shown in Figure <ref>. As can be seen from the figure, after adding this inverse-prompt task, the recall-score of the model decreased by 3%, while the precision-score increased by 5.5%, which also increased the overall f1-score by 2.4%. We also conducted a case study on the output of the model, and the results are shown in Table <ref>. After the tasks are added, the repeated generation of the model is significantly reduced. These results above illustrate that the proposed task enables the model to learn deep relationships between slot types, thereby reducing the problem of repeated generation. § LIMITATIONS AND FUTURE WORK The current work does achieve better performance than previous methods, but processing only one slot type at a time also reduces the efficiency of the model. In the future, we will explore how to maximize model efficiency. It would be an interesting challenge to generate answers for all the slots at once without degrading the effect of the model. 
Also, we will also try to apply our framework to more scenarios, such as NER and other tasks to explore the adaptability of the proposed method.
http://arxiv.org/abs/2307.01812v1
20230704163339
Testing the Potential of Deep Learning in Earthquake Forecasting
[ "Jonas Koehler", "Wei Li", "Johannes Faber", "Georg Ruempker", "Nishtha Srivastava" ]
physics.geo-ph
[ "physics.geo-ph" ]
Testing the Potential of Deep Learning in Earthquake Forecasting Jonas Köhler,^1,2, † Wei Li,^1 Johannes Faber,^1,3 Georg Rümpker,^1,2 Nishtha Srivastava^1,2,∗ ^1Frankfurt Institute of Advanced Studies, Ruth-Moufang-Str. 1, 60438 Frankfurt am Main, Germany ^2Institute of Geosciences, Goethe-University Frankfurt, 60438 Frankfurt am Main, Germany ^3Institute for Theoretical Physics, Goethe Universität, 60438 Frankfurt am Main, Germany Corresponding Authors; E-mail: ^†[email protected], ^∗[email protected]. ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Reliable earthquake forecasting methods have long been sought after, and the rise of modern data science techniques raises a new question: does deep learning have the potential to learn the patterns that precede large earthquakes? In this study, we leverage the large number of earthquakes reported via the good seismic station coverage in the subduction zone of Japan. We pose earthquake forecasting as a classification problem and train a Deep Learning Network to decide whether a time series of length ≥ 2 years will end in an earthquake with magnitude ≥ 5 on the following day or not. Our method is based on spatiotemporal b value data, on which we train an autoencoder to learn the normal seismic behaviour. We then take the pixel-by-pixel reconstruction error as input for a Convolutional Dilated Network classifier, whose output could serve as an earthquake forecast. We develop a special progressive training method for this model to mimic real-life use. The trained network is then evaluated over the actual data series of Japan from 2002 to 2020 to simulate a real-life application scenario. The overall accuracy of the model is 72.3%. The accuracy of this classification is significantly above the baseline and can likely be improved with more data in the future. § INTRODUCTION Earthquakes are some of the most destructive and unpredictable natural phenomena on earth, and their forecasting has posed a significant problem for seismologists. The ongoing extensive data collection coupled with a growing understanding of seismic processes could have the potential to improve this in the future. Previous forecasting attempts Smith1981, Main1989, Smyth2011, Gulia2019 have frequently used the empirical Gutenberg-Richter law Gutenberg1944, log_10 N = a - b M, which gives a logarithmic relation between the number of earthquakes N above a certain magnitude M. The b value gives an estimate for the occurrence of large earthquakes compared to smaller earthquakes. Furthermore, its spatial distribution can be interpreted as a proxy for the distribution of seismic stress. Therefore, the b value has long been considered suitable for earthquake forecasting: e.g., b value time series anomalies are used as precursors to large events Smith1981, Main1989, for earthquake rate forecasting Smyth2011, or to discriminate foreshocks from mainshocks Gulia2019.
Over the past decades, science has witnessed increased use of Deep Learning, especially leveraging the fast development in image processing LeCun2015. With the ever increasing amount of seismic data over time, this shift is also well-received in seismology and is successfully applied in tasks such as event detection and magnitude estimation using seismogram data Chakraborty2022 or GPS (HR-GNSS) data Quinteros2023, seismic phase picking Li2022, synthetic waveform generation lehmann2023 and more applications are likely to follow Mousavi2022. Deep Learning for Earthquake forecasting has equally produced some promising results recently Shan2022, Herrera2022, Stockman2023, Fox2022. One of the most common methods in earthquake rate forecasting is the Epidemic Type Aftershock Sequence (ETAS) modelling Ogata1993. It is based on a point process to model the temporal activity, where a base rate of earthquakes generates aftershocks. While this model cannot be used to forecast main shocks, it is quite successful at estimating the rate of aftershocks. However, there are no studies which combine the power of advanced deep learning architectures to handle larger data series and spatio-temporal b value distributions in order to do earthquake forecasting. In this work, we applied neural network architectures such as an autoencoder, Temporal Convolution and 2D convolutions to test the potential of deep learning in earthquake forecasting. We use Japan as the study region which is visited by a high number of earthquakes every year, providing an ample amount of data to work with Wakita2013. Furthermore, it has an extensive network of seismic stations resulting in a good catalog completeness Nanjo2010. The tectonics of Japan is primarily characterized by the eastward subduction of the Pacific Plate beneath the Okhotsk Plate to the north and the Philippine Plate to the south. Concurrently, the Philippine Plate also undergoes subduction beneath the Eurasian PlateBird2003. For the purpose of this study, we focus on the subduction zone of the Pacific and Okhotsk Plate. The problem posed by a forecast of this type is twofold: (i) the limited training data, and (ii) the change of the underlying system by the seismic progression of the region. To account for this, we develop a progressive training routine using temporal increments to train the model continuously with a constant learning rate. With this approach, we do not have dedicated testing and validation sets, but rather a new testing-and-validation set for every new period. As we only train once, sequentially forward in time, no future information seeps into the model. § MATERIALS AND METHODS §.§ Catalog Preparation For this study we focus our attention on the region [20^∘, 50^∘] × [120^∘, 150^∘] around Japan, as well as its subset [35^∘, 46^∘] × [135^∘, 146^∘] (See Figure <ref>). This subset is chosen to simplify the system: it reduces the relevant plate boundaries from three to one, and limits the data to an area where most larger earthquakes occur in the shallow first 70 km. We use the ISC Catalog ISC_1of3, ISC_2of3, ISC_3of3 from 1999-01-01 to 2019-12-31 to create our dataset (accessed date is March 11th, 2022). ISC has better completeness than USGS catalog for our area of interest. For our date and depth range, the catalog consists 3,111,016 events, recorded in a multitude of different magnitude types. 
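As an aside, the event selection just described can be expressed in a few lines of pandas. This is a hedged illustration, not the authors' preprocessing code: the column names and the CSV export are assumptions, and only the region, date range, and shallow-depth cut stated in the text are taken from the paper.

```python
# Illustrative catalog selection for the study region and date range described above.
import pandas as pd

REGION = dict(lat=(20.0, 50.0), lon=(120.0, 150.0))   # full study region
SUBSET = dict(lat=(35.0, 46.0), lon=(135.0, 146.0))   # simplified subduction-zone box

def select_events(csv_path, box=REGION, max_depth_km=70.0,
                  start="1999-01-01", end="2019-12-31"):
    cat = pd.read_csv(csv_path, parse_dates=["time"])   # assumed column names
    keep = (
        cat["time"].between(pd.Timestamp(start), pd.Timestamp(end))
        & cat["latitude"].between(*box["lat"])
        & cat["longitude"].between(*box["lon"])
        & (cat["depth_km"] <= max_depth_km)              # keep shallow events only
    )
    return cat[keep].reset_index(drop=True)
```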
We use the most common ones (, , , , ) and convert them to moment magnitude , which is not widely present in the catalog, using the relations given in Sawires2019. If the magnitude is given in more than one of the listed magnitude types, we prioritize the magnitudes in the order listed above. By restricting the use to only those magnitude types, we only lose 1884 events of smaller magnitudes (≤ 3.5) for the whole region. Furthermore, we also limit the depth of earthquakes to 70 km, so only shallower earthquakes are considered. There are two main reasons for this: on the one hand, catalog completeness changes with depth and we deem it preferable to have a consistent catalog completeness; on the other hand, the b value empirically changes with depth, which we do not take into account. Limiting the depth should therefore reduce the error we acquire by ignoring the depth in our b value calculation: we keep deep strong earthquakes from influencing the surface b value. The dataset contains the 2011 Tōhoku Earthquake, a megathrust event which significantly changed the distribution of earthquakes in the area of interest. This is likely detrimental to the results but unavoidable, since the catalog before 1999 is less complete and the available time after 2011 is too short and still filled with increased seismicity due to the aftershocks of the Tōhoku Earthquake. The cumulative magnitude plot for converted and unconverted data is shown in Figure <ref>, which shows a clear improvement in the linear behavior postulated by the Gutenberg-Richter law. This is an important property, as we depend on the stability of the b value for our further analysis. §.§ b-Value Calculation Using the converted catalog we now calculate the b value on a fine 0.1^∘ by 0.1^∘ grid for each day after 2000-01-01. For each of those boxes, the b value is calculated using the relation from Aki1965 on the earthquakes which occurred within a 0.25^∘ radius of the location center and within the last 365 days, akin to a cylindrical stencil. This yields a b-value array of size 7305 × 300 × 300. If there are fewer than 2 earthquakes, we set the b value to 0; the same applies in the case where all magnitudes for one cylinder have the same value. We clip the b value between 0 and 2, as there are some instances with three earthquakes and very high b values. This method leads to unrealistic b-value results in the border regions of the seismic station coverage, as b=0 and b=2 cells border frequently, but empirically works well in this study. §.§ Dataset Creation The creation of the dataset represents an important step for a machine learning approach, because it embeds the information and format that the network should learn. For our classification problem, we define two classes, each referring to a time series of spatial b value reconstructions: (1) an earthquake class (referred to as ), defined by an earthquake of magnitude ≥ 5 on the day after the series ends, and (2) an class, defined by the absence of such an event: * No earthquake with ≥ 4.2 within a 0.8^∘ L1 radius * No earthquake with ≥ 4.2 within ± 7 days The stronger conditions on the class (as compared to "anything that is not ") were introduced to increase the contrast between the two classes. The classification is performed with our own Deep Learning architecture, which we test against other published networks.
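The b-value block referenced in the following paragraphs can be computed along the lines of the sketch below. It is a simplified illustration of the per-cell estimate described in the b-Value Calculation subsection: the maximum-likelihood form b = log10(e) / (mean(M) - M_min) follows Aki (1965), but using the smallest magnitude in each cylinder as the reference magnitude is our simplification, not necessarily the authors' choice, and the field names are assumed.

```python
# Sketch of the daily, gridded b-value estimate with a cylindrical stencil.
import numpy as np

def aki_b_value(mags, b_min=0.0, b_max=2.0):
    mags = np.asarray(mags, dtype=float)
    if mags.size < 2 or np.all(mags == mags[0]):        # too few events or degenerate
        return 0.0
    b = np.log10(np.e) / (mags.mean() - mags.min())     # Aki-style ML estimate
    return float(np.clip(b, b_min, b_max))              # clip to [0, 2] as in the text

def b_value_map(catalog, day, lat_grid, lon_grid, radius_deg=0.25, window_days=365):
    """catalog: DataFrame/structured array with 'time', 'lat', 'lon', 'mag' fields;
    day: np.datetime64 of the map date."""
    recent = catalog[(catalog["time"] <= day) &
                     (catalog["time"] > day - np.timedelta64(window_days, "D"))]
    grid = np.zeros((lat_grid.size, lon_grid.size))
    for i, lat in enumerate(lat_grid):
        for j, lon in enumerate(lon_grid):
            d2 = (recent["lat"] - lat) ** 2 + (recent["lon"] - lon) ** 2
            grid[i, j] = aki_b_value(recent["mag"][d2 <= radius_deg ** 2])
    return grid
```

Stacking one such map per day yields the 7305 × 300 × 300 block used below; an efficient implementation would vectorize the spatial loop, which this sketch keeps explicit for clarity.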
As our b-value block starts on 2000-01-01 and we require 512 days of history for our classification later, the classification is limited to events after 2001-05-27 (2000-01-01 + 512 days) and 2019-12-31, in which time there are 4172 ≥ 5 events, 2328 of which fall in the area used for training (green box, Figure <ref>). For each of those events we carve a 512 × 32 × 32 shaped sample centered around the epicenter of the event out of the b value block, so that the earthquake would happen on the 513th day, exactly one day after the sample block ends. The hypothesis here is that there are characteristic patterns in the b value prior to a large event Main1989, Gulia2019, and the network is supposed to find them. On the other side of the 4172 larger events, we also need a class that constitutes “calm” situations in which spatiotemporal sequences are not followed by a larger event. For non-earthquake class we calculate all position for which all the following criteria (later referred to as SC) are met: * Within one week before and after the selected point as well as within [-0.8^∘, 0.8^∘] × [-0.8^∘, 0.8^∘] around the location there are no earthquakes with ≥ 4.5. * The b values in the 512 × 32 × 32 block to be considered is calculated using at least on average 10 earthquakes. From these possible locations we choose a number equal to the number of class events to maintain a balanced dataset for each training instance. §.§ Deep Learning Approach In principle we designed a two step approach similar to Fox2021 for the classification of the previously defined b value blocks. The idea is, that the autoencoder learns the “normal” state of the system, so when the system is in an abnormal state, the reconstruction error will be different. A second network, will then be used on the reconstruction error to classify the input between “normal” () and “abnormal” () states. Empirically we have found, that using this two stage architecture works better than using the classifier immediately on the raw data. In the first step we take the [32 × 32] b value array and run it through an autoencoder. We then take the difference between reconstruction and input to create an array of reconstruction error. This is done for all 512 instances per block to create the input for our second step. The process is also illustrated in Figure <ref>. The second step consists of a Convoluted Dilational Network (CDN), an architecture presented here and developed specifically to deal with 2D spatial and 1D temporal data. The CDN is used to classify the b valued difference blocks into the two categories of “followed by a large earthquake” (class ) and “not followed by a large earthquake” (class ). Both networks as well as their training processes are described in detail in the following two sections. §.§.§ Autoencoder The autoencoder (AE) is trained on a dataset of 250,000 images of 32 × 32 pixel values of the b value from the whole region of which samples could be taken in for the classifier. This includes all 365+512 days before an earthquake at that location (which would then be centered in the 32 × 32 image) as well as all locations that can possibly be selected according to the SC criterion defined above. 80% of those images are used as training data, 20% are used for testing. The training is performed using mean squared error as the loss, and the model is trained until the validation loss does not improve for 50 epochs. 
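The autoencoder training loop just described (mean squared error, an 80/20 split, and early stopping after 50 epochs without improvement) can be sketched as follows. The optimizer, batch size, and the generic dense bottleneck in build_autoencoder() are assumptions for illustration; the authors' final architecture, with dropout and a skipped connection, is detailed in the next paragraph.

```python
# Hedged sketch of the autoencoder training setup on 32x32 b-value images.
import tensorflow as tf

def build_autoencoder():
    inp = tf.keras.Input(shape=(32, 32))
    x = tf.keras.layers.Flatten()(inp)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    code = tf.keras.layers.Dense(32, activation="relu")(x)        # bottleneck
    x = tf.keras.layers.Dense(256, activation="relu")(code)
    out = tf.keras.layers.Reshape((32, 32))(tf.keras.layers.Dense(32 * 32)(x))
    return tf.keras.Model(inp, out)

def train_autoencoder(b_value_images):                 # shape (250000, 32, 32)
    model = build_autoencoder()
    model.compile(optimizer="adam", loss="mse")        # optimizer is an assumption
    stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=50,
                                            restore_best_weights=True)
    model.fit(b_value_images, b_value_images, validation_split=0.2,
              epochs=10_000, batch_size=256, callbacks=[stop])
    return model
```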
We tried out different architectures for this task, and a network based on dense layers, dropout, and skipped connections outperformed the alternatives (different variations of those three components and 2D Convolution layers). The final model is as follows: The input is flattened to 1024 dimensions, and each subsequent dense layer cuts that in half. The smallest layer is 32, after which we double the dimensions until we reach 1024, which we reshape to 32 × 32. The first two layers have a dropout of 0.5, and the skipped connection runs from after the second dropout layer (256 dimensions) to the second-to-last layer. §.§.§ Classifier The second part of our forecasting pipeline consists of a classifier that takes a time series of the reconstruction error of the autoencoder. The data should be classified into two categories, referred to as (earthquake with a magnitude ≥ 5 ) and (data selected according to SC). This gives the network spatiotemporal information about the b value leading up to the point of interest, 1.6^∘ in either direction and 512 + 365 days (with an overlap of 365 days) in the temporal axis. The architecture is the combination of 1D convolutional layers in time (TCN Lea2017) and 2D convolutional layers in space. The spatial dimensions are reduced with 3D Convolutional layers (with a kernel of shape [1 × 2 × 2] in order to only act on the spatial dimensions) and the temporal dimension is reduced using 1D Convolutional layers of increasing dilation. Between the different types of convolutions we apply the necessary reshaping operations, and every 1D convolutional layer is followed by an activation layer (Leaky ReLU with α = 0.1). Due to the changing dimensions we did not use the skip connections as a ResNet He2016 architecture would, although this could be implemented by way of a maxpooling layer as an alternative path to the convolutions. The architecture, as well as the intermediate dimensionalities of the data, are shown in Figure <ref>. After the final convolution layers we have 32 channels, which are reduced to a single output using one dense layer. §.§.§ Architecture Comparison The CDN architecture used in this paper was tested against several state-of-the-art deep neural networks for similar data structures. An overview of the results, parameters and speed is shown in Figure <ref>. Temporal Convolutional Network (TCN) Lea2017. This architecture consists of 9 1D Convolutional layers with increasing dilation, reducing the temporal dimension from 512 to 1. Initially, the input of a single day is flattened to a 1D array, which is retained throughout the temporal convolution. As a last step, the 1024 dimensional array is reduced to one value. Long Short-Term Memory (LSTM) Hochreiter1997. For the LSTM Network we use 4 layers with 1024, 256, 64, and 16 units, the first three of which return sequences. The last layer is a dense layer, reducing the output to one dimension. ResNet He2016. Our 3D data with the temporal component is not an ideal input for ResNet, as the temporal component is not treated in an appropriate way. For this architecture we use 7 × 3 ResNet Blocks with the channels doubling every three blocks (starting with 1 channel). One block consists of two repetitions of a 2D Convolutional layer, batch normalization, and a ReLU activation.
After each group of three dense ResNet Blocks, we use a Maxpooling layer to reduce the temporal extent by a factor of two (this is not in accord with the original network from cite He2016, but necessary to reduce the dimension of the problem to accommodate hardware limitations). Each ResNet Block can be skipped. Dilated Recurrent network (DRN). With the Dilated Recurrent Network we combine the approaches of ResNet and TCN. For this we use 1D Convolutions in time (TCN-like) while also providing skipped connections via maxpooling in time. The DRN blocks consists of twice, a 1D convolution layer, batch normalization, and activation, while the skipped connection via maxpooling layer connects just before the last activation. To achieve the correct dimensions the maxpooling has a pool size of 2 and a stride of two. This DRN is not to be confused with the Dilated Recurrent Neural Networks from Chang2017DRN, but there are some similarities. Convolutional Dilated Network (CDN). The Convolutional Dilated Network is an improvement on the Dilated Recurrent Network and the basis for the results presented in this paper. The goal while designing this network was to combine spatial reduction from the TCN while at the same time keeping the 2D spatial features of the data, instead of flattening the spatial dimensions as in the DRN. Reshaping the data between 1D Convolution from the TCN and the 3D Convolutions for the feature extraction adapts the network to the given problem. Similar to TCN and DRN, the CDN uses 9 blocks to reduce the 512 dimensional time axis to 1. At the same time, to reduce complexity, we reduce the spatial dimensions every second block (starting wit the first one). A CDN block starts with a 3D Convolutional layer with a (1,2,2) kernel and a (1,2,2) stride for the odd numbered blocks, and continues for all blocks with a reshape to enable the 1D Convolution, a reshape back, a batch normalization and an activation. The 3D convolution increases the number of channels by a factor of two, the 1D Convolution keeps it constant. The final 32 dimensional output is reduced to a scalar with one dense layer. Even though the DRN has the highest overall accuracy, we use the CDN in this paper, for the following reasons: * The results are more stable between runs. * The training is faster. * The network uses less parameters. * The area under the curve (AUC) of the ROC is larger, which points to a more stable threshold and is generally regarded as a better measure for the evaluation of models Ling2003, Huang2005. §.§.§ Training the Classifier Training the classifier without violating causality and in a way that can be used in applications required an evolving training and testing set, Figure <ref> shows how this is accomplished. The training process is divided in meta epochs. A meta epoch refers to both, a training cycle consisting of 20 epochs of normal training, and the temporal extent of the validation data, which is 30 days. In order to progressively train the model on newer data, each meta epoch trains and validates on new data, which is also why we do not use a dedicated testing set in this work. At the beginning the whole time domain is split in chunks of 30 days. Then training is performed on an increasing number of those 30 day chunks / meta epochs. Training starts with 6 such chunks (because starting with less means starting with an almost empty training set). Therefore the first meta epoch is labeled meta epoch 6, because it contains 6 chunks. Each meta epoch is trained for 20 normal epochs. 
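The meta-epoch scheme just outlined can be summarized in a short training loop. This is a hedged sketch under the constants stated above (30-day chunks, 20 epochs per meta epoch, starting at meta epoch 6); the helper functions for assembling the training and validation sets are placeholders for the logic described in the next paragraph, not the authors' code, and it assumes a Keras-style model compiled with an accuracy metric.

```python
# Sketch of the progressive (causal) meta-epoch training loop.
CHUNK_DAYS, EPOCHS_PER_META, FIRST_META = 30, 20, 6

def progressive_training(model, chunks, build_train_set, build_val_set):
    """chunks: list of 30-day data chunks in chronological order."""
    history = []
    for meta in range(FIRST_META, len(chunks)):
        train_x, train_y, weights = build_train_set(chunks[:meta])   # past data only
        val_x, val_y = build_val_set(chunks[meta])                   # the next 30 days
        model.fit(train_x, train_y, sample_weight=weights,
                  epochs=EPOCHS_PER_META, verbose=0)
        loss, acc = model.evaluate(val_x, val_y, verbose=0)
        history.append((meta, acc))            # per-meta-epoch validation accuracy
    return history
```

Because each meta epoch validates only on data that lies strictly in the future of everything the model has been trained on, no future information leaks into the forecasts.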
The training and validation sets change each meta epoch (see Figure <ref>): The training data contains all earthquake-class events up to the current time, as well as a random selection of events of the other class from the possible locations allowed by the SC criteria defined above. This exposes the network to a wide variety of events over time. In order to prevent overfitting on older events, to which the network would be exposed more often compared to newer events, we weigh all samples from the training data that were not newly added this meta epoch such that the newly added data equals (in weight) the old data. The validation set is chosen from the next meta epoch. It contains all earthquake-class events of that 30 day period as well as an equal number of randomly chosen events of the other class from the same period. § RESULTS The training and validation here are done in 30 day intervals, where the model is trained on some amount of data, validated on the next 30 days, then trained on the combined data and validated on the following 30 days, and so on. This training method is explained in more detail in Figure <ref>. Owing to this specialized training procedure, giving a normal model evaluation such as a single final accuracy is less informative than for other models. We therefore name the single training periods meta epochs and report the relevant results for those meta epochs instead of a total result. As the progressive training changes the model and the model is applied to new data for each determination of the accuracy, the overall accuracy of the model is difficult to determine. However, the accuracy is the most common metric to measure the model performance, so Figure <ref> shows the accuracy over time. The time intervals are given in increments of 30 days, which also correspond to the progressive training cycles and are referred to as meta epochs from now on. As the 30 day forecasting period (= 1 meta epoch) usually contains only one to three earthquakes (see Figure <ref> for more details on the distribution), resulting in rather quantized accuracy values, Figure <ref> shows the accuracy averaged over 5 meta epochs as well. Figure <ref> shows that after a few meta epochs at the beginning the accuracy improves to 75%-80% and only declines during the last two years to around 60%. The optimal discriminating threshold between the two classes is not necessarily 0.5 (with a model output between 0 and 1), and the Receiver Operator Characteristic (ROC) Bradley1997 helps to determine the best value to classify the input by plotting the true positive rate against the false positive rate, allowing one to find the optimal combination of both. Figure <ref> shows the ROC as well as the resulting best confusion matrix Ting2017, which displays the number of true/false positives/negatives. The overall accuracy of the model, with the caveats discussed above, is 72.3%, including the initial “untrained” phase. It is calculated on the final model of each meta epoch, with the validation data of that epoch. Targeting the Magnitude ≥ 7 Earthquakes Since correctly forecasting earthquakes with magnitude ≥ 7.0 is a priority from a hazard analysis point of view, we also target the 11 earthquakes with magnitude ≥ 7.0, listed in Table <ref>, in addition to the general magnitude ≥ 5 class. Those earthquake locations are shown in panel A of Figure <ref>.
In order to see how the model output changes over time for those locations, we also show the model output for the whole time domain at each earthquake's location, with the model parameters being set to correspond to the time of the forecast, so that no future information is used. This is seen in panel B of Figure <ref>, together with the time of the earthquake (vertical line) and the model output for that day. This illustrates the abilities and drawbacks of the model: Generally the output is quite high in high seismicity regions. In the beginning, the untrained model learns fast to increase the output, which can be seen in all traces from the beginning of 2002 to the end of 2003. However, then the model adapts and the forecasts begin to differ. Earthquake C is captured quite well: the model output increases just before the earthquake occurs and decreases afterwards. The other earthquakes need closer examination, which is provided in Figure <ref>. Here, the model output for earthquakes E and G is shown in greater detail. The model output is augmented with the distance and magnitude of nearby earthquakes: the right scale shows the magnitude of nearby earthquakes, which are shown (if the magnitude is > 5) as vertical lines in the respective model output, their height signifying their magnitude on the right scale. The color corresponds to the overlap in pixels for the [32 × 32] geographical location. Earthquakes with magnitude > 6 are also marked again above the model output for greater color perception by the reader, with the color of the upper triangle corresponding again to the overlap between the presented earthquake and nearby earthquakes, and the lower triangle corresponding to the magnitude. The respective colormaps are on the left and consistent for the figure. A complete version for all 11 earthquakes can be found in the supplemental material, see Figure <ref>. § DISCUSSION In order to investigate whether the overall forecasting ability of the model is valid rather than an assortment of artifacts, we will have a closer look at several possible sources of errors: magnitude, depth, quality (or the number of earthquakes used for the b value calculation), and location. §.§ Magnitude Dependence If the premise that the probability of an earthquake happening can be forecast using changes in b value holds, it would suggest that larger events are easier to forecast, simply because there are innumerable small earthquakes with magnitude < 3 that do not all have some large pattern of b value changes as a precursor. However, the model is trained mostly on forecasting smaller earthquakes, as the validation data contains mostly those (following the Gutenberg-Richter law Gutenberg1944), so it is a priori unclear if the model can work for large earthquakes. In order to test this, we analyze the magnitude dependence of the network's model outputs, as shown in Figure <ref> (A). The left plot shows the prediction value for all samples, their magnitude and their meta epoch. While they cluster around 0 and 1, there is a clear trend towards 1. The mean prediction of samples averaged over five meta epochs is shown in Figure <ref> B, clearly showing good results until roughly meta epoch 180. The right plot shows the prediction distributions for six magnitude bins; the center line is marked with the actual predictions, and the mean (colored) and average (black) for each bin are marked.
It seems like smaller earthquakes are actually easier to forecast, which is probably due to two reasons: (1) the pattern that the network recognizes depends on the size of the earthquake, which seems likely, but since there are fewer large earthquakes, the network does not learn their pattern as well, or (2) smaller earthquakes are more commonly aftershocks and the network just learns to predict more earthquakes in periods after larger earthquakes. While (2) would be detrimental to our efforts, not all of the performance could be explained away using this argument, because even the larger events (which are not aftershocks) are generally well predicted with significant accuracy. §.§ Depth Dependence Earthquakes vary with depth and location; in our region they appear at higher depths in the subducting Pacific plate, as well as in the shallow zones of the Okhotsk plate in the west and the Pacific plate in the east. Depth is not a parameter the model is given in any way (apart from the depth limit of 70 km in the b value calculation), so in principle mixing different origins of earthquakes could pose a problem. Figure <ref> B shows the effect of the depth on the model output, which seems to be negligible, meaning there is no bias such that only deep or shallow earthquakes are treated correctly. §.§ Quality Dependence The quality of the b value data used for the forecasting process is bound to have an influence on the accuracy of the forecasting. Quality here is defined as the certainty with which the b value is determined. For this purpose we look at the average number of earthquakes used to calculate all single values of b in the input array. A detailed explanation of how the b value is calculated can be found in <ref>. If the number is low, the b values will be calculated on only a few events, which leads to drastic changes between neighboring cells in the input data (even after the autoencoder is used) if a single earthquake is added or removed. Theoretically, better b values should correspond to a better prediction; however, better b values will also correspond to either a higher seismicity region or a region better covered by seismic stations. However, the quality is not given to the model directly, it would have to be determined from the input data. The impact of the quality is shown in the bottom two panels in Figure <ref>. The forecasting is clearly dependent on the quality of the data; however, a higher quality does not translate to a higher model output, which is good, since a higher quality does relate to a higher general seismicity. The fact that this does not immediately lead to a higher model output shows the network learns more than just a seismicity estimate. §.§ Location Dependence Another possible source of errors is the location. The network could learn where in the domain the sample was taken from and simply predict earthquakes in the regions where there are many earthquakes. While the coordinates of the input are not given, some features can consistently be associated with certain locations, first and foremost the boundary of the area where the b value is calculated (this consistency can be seen in Figure <ref>). Figure <ref> shows the relevant data to assess this, which indicates a slight impact: Figure <ref> A shows the locations for the training data sets, Figure <ref> B the locations for the validation sets. Figure <ref> C and D show the mean model output for training and validation instances respectively.
Figure <ref> C therefore shows very strong overfitting, but Figure <ref> D shows a clear pattern of accurate forecasting. The number above each of these plots is the weighted mean value in the image. Figure <ref> E and F illustrate the locations that are included in the validation and training set (at different times). §.§ Reproducibility During the training process we noticed a heavy dependence on the initial conditions, even to the point of some models being unusable for some initial conditions. This can be seen for the DRN network in Figure <ref>. Since a random initialization of the network is common practice, we conducted a study of different seeds to survey this behaviour and find a good seed to work with. For this, we iterated over a set of fixed seeds. We trained a total of 10 models and chose the best one to conduct the analysis presented in this paper. The results of all of them are shown in the supplementary material, see Figure <ref> A. Impact of the Autoencoder We repeated the same reproducibility study, only omitting the autoencoder, using the b value directly as an input to the classifier network and repeating this ten times. With the autoencoder the average accuracy is 68.6% with the best accuracy 72.3%; without the autoencoder the average accuracy is 65.0% with 68.7% as the best accuracy. The autoencoder therefore improves the performance by roughly 3.6%. The idea behind using the autoencoder is that it learns the normal b value images, and that by using the difference the abnormality of the system is highlighted. The results of both sets of 10 models are shown in Figure <ref>. §.§ Comparison to other work There seems to be some possibility in using b value distributions to forecast earthquakes Smith1981, Main1989, Smyth2011, Gulia2019. While the aforementioned publications worked by manually (or by the use of classical statistics) looking for patterns in the data, this process is the core ability of deep learning, which makes it a promising approach. This shifts the problem of earthquake forecasting to one of data selection and preparation, where the pattern recognition is left to a deep learning network. §.§ Some Caveats Concerning the results shown in Figures <ref> and <ref>, the model has the tendency to give long stretches of high outputs for high seismicity regions; however, it does not always give a high output for these regions. There are several interpretations of this, the most charitable being that the occurrence of the large earthquake is “ready” at some point (leading to a high model output) and will stay there until the tension is released at a random point in time. This could also distribute the high model output spatially, as magnitude > 7 earthquakes have large rupture areas, even exceeding 1000 km for magnitude > 9 Thingbaijam2017. Following this idea, the model output does not decrease immediately after a large earthquake, because there is still tension, whether new through the earthquake or retained from before, and it will take a while to release it. Less charitable would be the interpretation that the model just always gives a high output in high seismicity regions, but this does not seem to be the case, see e.g. earthquake D in Figure <ref>, but also the low model output times at the other locations – the earthquakes are in regions that should always be considered high seismicity. A problem with this dataset is the Tōhoku earthquake in the temporal center of the data (day 3576 of 6793).
An earthquake of that magnitude has the potential to change the seismic behaviour of a region, and it produces a long sequence of significant aftershocks which must be part of a forecast (because they are big enough to be destructive themselves) but are so numerous that they can skew the results in a machine learning setting. The impact is visible in all temporal figures presented in this paper. This also has an impact on the previous point of the consistently high model outputs: Since the aftershock sequence is part of the forecast, a high output is somewhat desirable and necessary if there are aftershocks, even though it would clearly be better to actually have a low model output in between the aftershocks. Apart from the data considerations, there is also a noticeable variability in the results even with smaller changes in the architecture (4-5%). Even though many such changes were made in the process of this work, it is still likely that a better architecture, even of similar complexity, could be found. Especially since there are many choices during this workflow that can with good reason be made in different ways, optimizing the combination of all those choices can likely improve the overall performance considerably. § CONCLUSION While there are obvious challenges in our model, such as the location dependent overfitting, these do not account for all of the model's forecasting performance. Using only roughly 2300 earthquakes to train the model shows that, as the dataset grows in the future, better forecasts should be possible. This work also shows the importance of improving the seismic network coverage in earthquake prone regions around the Pacific Ring of Fire where the coverage, and consequently the magnitude of completeness, is poorer, for example in Indonesia, Kamchatka, and the west coast of South America. While our results are quite promising for advancing the problem of earthquake forecasting, problems with the short time frame of the training data as well as the validation against a fair baseline assumption remain. Training on more regions which do not have extreme seismicity changing events and including more events from the more recent past as well as changing starting days will very likely improve the model's performance in the future. § ACKNOWLEDGEMENTS This research is supported by the “KI-Nachwuchswissenschaftlerinnen" – grant SAI 01IS20059 by the Bundesministerium für Bildung und Forschung – BMBF. The calculations were performed at the Frankfurt Institute for Advanced Studies’ GPU cluster, funded by BMBF for the project Seismologie und Artifizielle Intelligenz (SAI). We thank Megha Chakraborty, Darius Fenner, Dr. Claudia Quinteros, Professor Geoffrey Fox and Dr. Kiran Thingbaijam for their helpful discussions. We acknowledge the help and advice from Prof. Dr. Horst Stoecker. The research has made extensive use of TensorFlow Tensorflow2015, numpy Numpy2020, and matplotlib Matplotlib2007. Geographical maps were made with Natural Earth. Free vector and raster map data @ naturalearthdata.com. The training was carried out on Nvidia A100 Tensor Core GPUs.
http://arxiv.org/abs/2307.01068v1
20230703144728
Symmetry group at future null infinity III : gravitational theory
[ "Wen-Bin Liu", "Jiang Long" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2307.01251v1
20230703180001
Analysing quantum systems with randomised measurements
[ "Paweł Cieśliński", "Satoya Imai", "Jan Dziewior", "Otfried Gühne", "Lukas Knips", "Wiesław Laskowski", "Jasmin Meinecke", "Tomasz Paterek", "Tamás Vértesi" ]
quant-ph
[ "quant-ph" ]
Paweł Cieśliński^1,*, Satoya Imai^2,*, Jan Dziewior^3,4,5, Otfried Gühne^2, Lukas Knips^3,4,5, Wiesław Laskowski^1,6, Jasmin Meinecke^3,4,5,7, Tomasz Paterek^1,8, Tamás Vértesi^9 (^* These authors contributed equally as co-first authors.) ^1 Institute of Theoretical Physics and Astrophysics, University of Gdańsk, 80-308 Gdańsk, Poland; ^2 Naturwissenschaftlich-Technische Fakultät, Walter-Flex-Straße 3, 57068 Siegen, Germany; ^3 Max Planck Institute for Quantum Optics, 85748 Garching, Germany; ^4 Faculty of Physics, Ludwig Maximilian University, 80799 Munich, Germany; ^5 Munich Center for Quantum Science and Technology, 80799 Munich, Germany; ^6 International Centre for Theory of Quantum Technologies, University of Gdańsk, 80-308 Gdańsk, Poland; ^7 Institut für Festkörperphysik, Technische Universität Berlin, 10623 Berlin, Germany; ^8 School of Mathematics and Physics, Xiamen University Malaysia, 43900 Sepang, Malaysia; ^9 MTA ATOMKI Lendület Quantum Correlations Research Group, Institute for Nuclear Research, 4001 Debrecen, Hungary. Randomised measurements provide a way of determining physical quantities without the need for a shared reference frame nor calibration of measurement devices. Therefore, they naturally emerge in situations such as benchmarking of quantum properties in the context of quantum communication and computation where it is difficult to keep local reference frames aligned. In this review, we present the advancements made in utilising such measurements in various quantum information problems, focusing on quantum entanglement and Bell inequalities. We describe how to detect and characterise various forms of entanglement, including genuine multipartite entanglement and bound entanglement. We discuss that Bell inequalities are typically violated even with randomised measurements, especially for a growing number of particles and settings. Additionally, we provide an overview of estimating other relevant non-linear functions of a quantum state or performing shadow tomography from randomised measurements. Throughout the review, we complement the description of theoretical ideas by explaining key experiments. Keywords: randomised measurements, t-designs, quantum entanglement detection, quantum entanglement characterisation, shadow tomography, Bell inequalities, reference frame independence. § INTRODUCTION A key difference between classical and quantum information processing is the number of possible measurements. While for a classical bit, only a single type of measurement is possible, i.e. reading out its binary value, an infinite number of different measurements can be applied to even a single quantum bit. Broadly speaking, this review explores the possibilities which arise when these measurements are chosen randomly. In fact, in many practical scenarios, a certain amount of randomness is unavoidable, for example, due to imperfect measurement setups or transfer of states through communication channels. It was observed in several of these scenarios that randomised measurements are powerful tools for the analysis of quantum systems <cit.>, but only in recent years a systematic approach was developed.
In particular, there are now detailed strategies how randomised measurements allow to detect and characterise entanglement, to determine certain invariant state properties and certify Bell-type quantum correlations. Let us begin by characterising more precisely what is meant by the term “randomised measurements”, which is sometimes used in different senses in the literature. In general, “randomness” has two aspects, a lack of control to choose a particular measurement setting and a lack of information about the chosen setting. A further distinction is the degree of fluctuation of the randomness, i.e. how much its effect varies with time or whether it is fixed. For illustration consider a model of the measurement of a quantum state by a single observer presented in Fig. <ref>. A source sends a single qubit prepared in a particular quantum state whenever the experimenter presses the top button at the source box. The same state is prepared each time the button is pressed. The qubit is then measured by an apparatus indicating the binary measurement outcome. Whenever the experimenter presses the top button on the measuring box, a new measurement setting is chosen randomly, i.e. the experimenter lacks control of the orientation of the subsequent setting. However, as in the case of Fig. <ref>a, it is still possible that the orientation of the randomly chosen setting is known, even if its choice cannot be controlled. The requirement for this knowledge is the existence of a reference frame which allows the observer to infer the relative orientation of the chosen settings in the space of possible settings. A simple practical example is a spatial coordinate system which allows to determine the orientation of spatially sensitive measurement devices in a lab. The situation in Fig. <ref>a is also an instance of fluctuating randomness, since the random orientation of the setting changes for each measurement. Scenarios as these, where there is more information about the setting than control over it, give rise to a range of possibilities to analyse the state which were reviewed, among other scenarios, in Ref. <cit.>. For example, one can characterise complex quantum systems, estimate non-linear functions of the quantum states and also perform quantum tomography. Notably, the knowledge of the randomly selected measurement setting provides information about the measured state even when there is only one source emission per setting, i.e. the experimenter presses the button on the measurement box each time she presses the source button. Interestingly, it is possible to obtain information about a physical system even when a reference frame is not available. In such a case the experimenter does not know the orientation of the randomly chosen settings, as depicted in Fig. <ref>b. Instead, information about the distribution of a set settings can be harnessed to obtain information about the state of a quantum system. Additionally, it is required to reliably estimate an expectation value for each randomly chosen (unknown) setting. This corresponds to pressing the source button in Fig. <ref>b) many times (to gather sufficient statistics) before pressing the button on the measurement box again to reorient the measurement direction. Crucially, this scenario does not require a lack of control which is independent of the lack of information. The absence of a reference frame is sufficient to account for the lack of control. Notably this scenario would even be random if the button would have no function, i.e. 
the state would remain at a fixed setting, such that "random" would simply denote a lack of information about the fixed orientation. This review focuses exclusively on scenarios as these, where randomness can be entirely viewed in terms of a lack of information and no recourse to control is necessary. For multipartite systems shared by several observers another point is the relative alignment of settings, i.e. the existence of a shared reference frame. Even if each observer has access to a reference frame, from now on denoted accordingly as local reference frame, still there might be unknown persistent or fluctuating rotations present between the particular frames of the observers. In some cases, these might only constitute a partial misalignment between the local frames, when for example in a multi-partite qubit state, observers share a single direction of the Bloch-sphere. In general, the establishing of a common reference frame is a physical process and in some practical scenarios, it is of interest to conserve the necessary physical resources <cit.>. The various scenarios of randomness are summarised in Tab. <ref> and classified according to the persistence of the randomness and the degree of availability of information about the setting. Randomised measurements can be implemented in an experiment on purpose or can appear naturally due to noisy environments. Note, that while some types of noise, such as decoherence or losses, do not correspond to random rotations of the reference frames, nevertheless there is a broad range of typically encountered environments, which can be modelled in very good approximation in this manner, formally expressed as unitary operations. Thus, the effects of these environments can be modelled by the transmission channel of the state containing a random unitary rotation, as illustrated in Fig. <ref>. These rotations can be fixed or fluctuating. However, even when fluctuations are present, they need to be slow enough as to allow the collection of sensible statistics with practically the same setting. In an approach which does not require a local reference frame, the experimenter does not need to press her button and a single fixed setting is sufficient to effectively perform a set of random measurements on the state, see Fig. <ref>a. The reason for this is that the unitary can be equivalently understood as acting on the state or acting on the setting and in general there is freedom whether to consider locally rotated states (fixed or fluctuating) or relative rotations of the local reference frames between the observers. In other cases it can be crucial to add another set of local rotations after the unitary rotation induced by the environment. This can correspond for example to a scenario where the noisy environment disturbs the relative alignment between the local reference frames, but each observer still has to be able to choose from a set of local settings in a well-defined local reference frame, as shown in Fig. <ref>b. Alternatively, for example to counter a possible bias from the environment in the selection of measurement settings, one can additionally randomise the measurement setting by selecting the setting from a suitable unbiased distribution, even if the biased distribution is unknown, depicted in Fig. <ref>c. Typical experimental implementations of such transformations are rotated waveplates and polarisers in optical polarisation measurements, or laser driven transitions in ion trap-based experiments. 
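In numerical studies of the scenarios above, the unknown rotations and the randomised settings are usually modelled by local unitaries drawn uniformly from the Haar measure. As a minimal illustration — our own sketch, not tied to any specific work reviewed here — a Haar-random unitary can be generated from the QR decomposition of a complex Gaussian matrix:

import numpy as np

def haar_random_unitary(d, rng):
    # QR decomposition of a complex Ginibre matrix; fixing the phases of the
    # diagonal of R makes the resulting unitary Haar distributed
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases

rng = np.random.default_rng(42)
U = haar_random_unitary(2, rng)                 # a random single-qubit rotation
print(np.allclose(U @ U.conj().T, np.eye(2)))   # True: U is unitary

For a single qubit (d = 2) this is equivalent to drawing the Euler angles of the rotation according to the Haar measure, and the same construction can be used whenever uniformly distributed measurement settings are needed.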
In fact, the quantitative methods to analyse states with randomised measurements described here, rely on a uniform sampling of measurement settings. In contrast to such scenarios with random noise or controlled randomness, there exist also experimental situations in which it can be very difficult or resource intensive to perform this sampling in particular since the amount of settings to sufficiently approximate uniformity is growing exponentially with the system size. For such scenarios alternative techniques are available which allow to extract the same quantities via comparatively smaller sets of locally aligned measurements, so called designs. In this review we present various approaches sorted according to the type of information they are able to extract from randomised measurements. In Sec. <ref>, we provide an intuitive introduction to the concept of randomised measurements based on a two-qubit example and introduce basic mathematical tools. In Sec. <ref> we review various entanglement criteria in terms of correlations between randomised measurements. They provide necessary and sufficient conditions for entanglement in pure states and certain mixed states, and in general give rise to witnesses capable of detecting genuine multipartite entanglement, bound entanglement or distinguishing various classes of entangled states. In Sec. <ref> we describe estimations of local unitary invariants such as purity. We also provide an overview of shadow tomography where randomised measurements play a crucial role. In Sec. <ref> we review approaches to detect Bell non-local correlations between physical systems with randomised measurements. We discuss quantifiers such as the probability of violation and strength of non-locality, present common definitions of genuine multipartite non-locality, and scenarios where even with randomised measurements a violation of a Bell-type inequality is certain. We conclude in Sec. <ref> and gather a list of interesting open problems encountered in the main body. § RANDOMISED MEASUREMENTS IN A NUTSHELL In this section, we explain basic formalism and give a brief overview of the possible applications of randomised measurements to the study of quantum systems. As an illustrative example, we start with the case of a two-qubit system and introduce the distributions of random correlation values as fundamental tools to investigate state properties. We identify the moments of these distributions as quantities which are straightforwardly accessible by randomised measurements and illustrate how they can be used in various criteria. The section includes several relevant concepts, such as quantum designs, sector lengths, local unitary invariants and PT moments. We conclude this section with a discussion of CHSH inequalities and the usefulness of Bell-type inequalities in the context of randomised measurements. §.§ Distributions of correlation functions To understand the general concept of randomised measurements and the difference in observations for entangled and separable states, let us first consider a simple two-qubit example with generalisations and extensions being discussed in subsequent sections. Assume we are given two states, a pure product state |ψ_prod⟩≡ |ϕ^A⟩⊗ |ϕ^B⟩, that we will choose as |00⟩, and an ideal Bell state, say a singlet state |ψ^-⟩=(|01⟩-|10⟩)/√(2), where we denote |0⟩ = (1 0)^⊤, |1⟩ = (0 1)^⊤, and abbreviate the tensor product as |xy⟩ = |x⟩⊗|y⟩. The state |ψ^-⟩ does not admit a product form. 
In fact, this property is a general definition of entanglement for any bipartite pure state. The two introduced states are now subjected to locally fluctuating randomised measurements. More precisely, many local projective measurements along randomly chosen measurement directions are performed such that a set of correlation values of the outcomes is obtained. The correlation function is a statistical parameter characterising the statistical dependence of the results and is given by the mean of their product. In a two-qubit experiment, where the j-th particle (for j=1,2) is measured in a setting represented by the normalised vector 𝐮_j on the Bloch sphere, with a binary measurement outcome r_j = ± 1, the correlation function reads E(𝐮_1, 𝐮_2) = ⟨ r_1 r_2 ⟩ = [ϱ (𝐮_1 ·σ⊗𝐮_2 ·σ)]. The average is denoted here by ⟨⋯⟩ and is in practice estimated by repeating the experiment sufficiently many times. The last expression in Eq. (<ref>) represents the quantum mechanical prediction for the average measured given the system's state ϱ. We use the short notation σ for the vector of Pauli matrices, σ = (σ_x,σ_y,σ_z), that will also be conveniently enumerated as (σ_1, σ_2, σ_3), such that 𝐮_j ·σ is an arbitrary dichotomic observable, with outcomes ± 1, of the j-th qubit. Such defined correlation functions are well known and feature, among other important applications, in violations of Bell inequalities. Consider now a scenario with a large amount of different, Haar randomly distributed measurements, i.e. directions that are distributed uniformly on a Bloch sphere. For more details and its generalisation see Sec. <ref>. For each randomly chosen set of measurement directions, the experiment is repeated sufficiently many times to obtain the correlation value arbitrarily close to the quantum prediction. Fig. <ref> shows histograms of correlation values for different two-qubit states. A very different behaviour is observed for the Bell state (yellow) and the product state (red). For comparison, this figure also includes distributions for mixed entangled states, i.e. states which cannot be written in a separable form given by ϱ^AB_sep=∑_i p_i ϱ^A_i ⊗ϱ^B_i, where p_i>0 and ∑_i p_i=1. The two presented classes of mixed states are Werner states ϱ_Werner=(1-p)/4 1_4 + pψ^-, and a two-qubit marginal of a W_3 state, that is ϱ^W_3_2 = _3(|W_3⟩⟨W_3|) with |W_3⟩ = (|001⟩+|010⟩+|100⟩)/√(3). The data presented in Fig. <ref> shows that the knowledge of the probability distribution of correlations contains valuable information characterising the state. Although this numerical simulation uses the ideal correlation values as described in Eq. (<ref>), a finite amount of different measurement directions has been chosen, leading to, e.g. the deviations of the yellow distribution from a perfect uniform distribution. It should be noted that, due to the nature of random measurements, states equivalent under local unitary (LU) transformations cannot be distinguished. For example, any maximally entangled two-qubit state gives rise to the same distribution of outcomes as the singlet state and every pure product state is indistinguishable from the product state used to compute the distribution in Fig. <ref>. This is not surprising since all entanglement properties are by definition LU invariant. We elaborate more about this in Sec. <ref>. §.§ Moments of probability distributions A glance at Fig. <ref> shows that the difference between the Bell state |ψ^-⟩ and the pure product state |ψ_prod⟩ is in the variances of their distributions. 
This immediately raises the question of whether a larger variance can in general imply entanglement. In the following, we expand on this intuition and formulate the corresponding entanglement criterion. To proceed, let us define the moments of the probability distribution for the values of the correlation function as ℛ^(t)(ϱ) = N_t ∫d𝐮_1 ∫d𝐮_2 [E(𝐮_1, 𝐮_2)]^t, where d𝐮_j = sinθ_jdθ_j dϕ_j denotes the measure on the unit sphere which is also the Haar measure, see Refs. <cit.>. Here, N_t denotes a normalisation constant, which is chosen differently across the literature. The moments are invariant under any local unitary transformations of the state such that ℛ^(t)(ϱ) = ℛ^(t)(V_1⊗ V_2 ϱ V_1^†⊗ V_2^†) for any single-qubit unitaries V_1, V_2. Therefore, the moments seem especially suited to capture the correlation properties of ϱ, in the absence of local reference frames. Note that ℛ^(t) vanishes for odd t since the sign of the correlation function is flipped under 𝐮_j → -𝐮_j. Indeed, as seen in Fig. <ref>, the expectation (t=1) is zero in all states. Thus, the first nontrivial result appears for t=2. As discussed in more detail in Sec. <ref>, the second moment gives rise to the following entanglement criterion: If ℛ^(2)(ϱ) > 1, then ϱ is entangled. This result is illustrated in Fig. <ref>, where the Bell state has a large variance compared to the pure product state. We should stress that the criterion (<ref>) is only sufficient but not necessary for mixed two-qubit entangled states. That is, there exist mixed entangled states that this criterion cannot detect. Examples are the Werner states ϱ_Werner for 1/3 < p ≤ 1/√(3), and the two-qubit marginal of the W_3, state ϱ^W_3_2. To distinguish between these states and detect a broader range of entanglement, we need to use higher moments (t>2) to construct more refined entanglement criteria. The details will be discussed in Secs. <ref> and <ref>. §.§ Quantum designs In order to evaluate the uniform average over the sphere in Eq. (<ref>), the concept of designs is very helpful. Consider a polynomial function P_t (𝐱) in n variables with degree t. We call a set X = {𝐱_j ∈ S_n}_j=1, …, K spherical t-design if 1/K∑_𝐱_j ∈ X P_t (𝐱_j) = ∫d𝐱 P_t (𝐱), where d𝐱 is the spherical measure on the n-dimensional unit sphere S_n with ∫d𝐱=1 <cit.>. That is, the integral for any polynomial of at most degree t can be evaluated by knowing the value of the polynomial at K discrete points 𝐱_j of the spherical t-design set X. By definition, the integral at the right-hand side in Eq. (<ref>) is invariant under any rotation on the sphere, so the evaluated expression on the left-hand side is also invariant. In general, if the allowed degree t or the dimension n increases, then a larger set X is required. The details will be discussed in Sec. <ref>. To give a concrete example, let us evaluate the second moment ℛ^(2) using the idea of spherical designs. This corresponds to the case t=n=2. It is well known that a set of K=6 unit vectors on orthogonal antipodals, {𝐱_j = ±𝐞_j: j=x,y,z}, where 𝐞_j are the Cartesian axes, is a spherical 2-design (and also 3-design) <cit.>. Using this spherical design, we rewrite each of the two integrals in ℛ^(2) over a two-dimensional unit sphere as the average over the set of six points on the sphere: ℛ^(2)(ϱ) =3^2 1/6^2∑_j,k=1^6 [E(𝐞_j, 𝐞_k)]^2 = ∑_j,k=x,y,z [(ϱ σ_j ⊗σ_k)]^2, where we choose the normalisation N_2=(3/4 π)^2 and use the fact that the even function [E(𝐮_1, 𝐮_2)]^2 does not change under the sign flip. 
As a result, the integral over the entire spheres 𝐮_1, 𝐮_2 is replaced by a sum of nine (squared) correlation functions computed along orthogonal directions on local Bloch spheres. Note that higher moments may be found in a similar manner, using designs for larger t. Recalling that ℛ^(2) is LU invariant and a convex function of a state, the separability bound can be found, without loss of generality, by considering the pure product state |ψ_prod⟩ = |00⟩. We therefore arrive at the criterion discussed in the last section, ℛ^(2)(ϱ_sep) ≤ 1 for any two-qubit separable state ϱ_sep. §.§ Bloch decomposition of multipartite quantum states Any single-qubit state ϱ_A can be expressed using the operator basis of Pauli matrices as ϱ_A = 1/2(1_2 + a_x σ_x + a_y σ_y + a_z σ_z). Since Paulis are traceless, the overall factor of 1/2 follows from the normalisation tr(ϱ_A) = 1. The positive semidefiniteness of the state ϱ_A is equivalent to the constraint ∑_j=x,y,z a_j^2 ≤ 1 <cit.>. The parametrisation in Eq. (<ref>) enables us to visualise the state as a point within a unit sphere in a three-dimensional space with coordinates a = (a_x, a_y, a_z). It is called the Bloch sphere and has the property that a pure state corresponds to a point on the surface of the sphere, while a mixed state corresponds to a point inside. It is essential to note that the length of its radius, denoted as L(ϱ) = ∑_j=x,y,z a_j^2, corresponds to the purity 𝒫(ϱ)=tr(ϱ^2), which remains invariant under unitary rotations. That is, 0 ≤ L(ϱ) = L(Uϱ U^†) ≤ 1, where the first and second inequalities are respectively saturated by the completely mixed state and pure states. The decomposition in terms of Pauli operators {1_2, σ_x, σ_y,σ_z} in Eq. (<ref>) is based on the relation tr(σ_μσ_ν) = 2 δ_μν for μ,ν = 0,1,2,3. In the same way, a tensor product of Pauli operators forms a basis for composite quantum states. For example, we can represent a two-qubit state ϱ_AB in the Bloch form ϱ_AB = 1/4( 1_2^A ⊗1_2^B + ∑_j=x,y,z a_j σ_j^A ⊗1_2^B + ∑_j=x,y,z b_j 1_2^A ⊗σ_j^B + ∑_j,k=x,y,z T_jkσ_j^A ⊗σ_k^B ), with σ_j^A∈ℋ_2^A and σ_j^B∈ℋ_2^B for j = x,y,z. The coefficients a_j and b_j describe the reduced states, whereas the so-called correlation tensor T = (T_jk) captures two-body quantum correlations. Here, the positivity of ϱ_AB implies several non-trivial constraints on the possible values of {a_j, b_j, T_jk}. Thus in general it is difficult to find the complete set of values satisfying these constraints. The simplest example is given by ∑_j=x,y,z (a_j^2 + b_j^2) +∑_j,k=x,y,z T_jk^2 ≤ 3 from the purity condition tr(ϱ_AB^2) ≤ 1, for details see Refs. <cit.>. Notice that Eq. (<ref>) can be written as ℛ^(2)(ϱ_AB) = ∑_j,k=x,y,z T_jk^2. Thus the sufficient criterion for two-qubit entanglement reads: if ∑_j,k=x,y,z T_jk^2 > 1, then the state is entangled. The Bloch decomposition can be generalised to n-particle d-dimensional quantum states (n-qudits) with ϱ = 1/d^n∑_j_1, ⋯, j_n=0^d^2-1 T_j_1⋯ j_nλ_j_1⊗⋯⊗λ_j_n, where λ_0 is the identity 1_d, and λ_j are the normalised Gell-Mann matrices, such that λ_j=λ_j^†, tr[λ_jλ_k]=d δ_j,k, and tr[λ_j]=0 for j > 0 <cit.>. The correlation tensor T_j_1⋯ j_n was essentially first considered by Schlienz and Mahler in Ref. <cit.>. Note that some references use different normalisation of the λ_j or yet different bases such as, for example, the Heisenberg-Weyl matrices <cit.>. We remark that the k-fold tensor T_j_1⋯ j_n for 1 ≤ k ≤ n, i.e.
the entries for which k indices are non-zero, characterises the k-body correlations of the (reduced) state. §.§ Sector lengths The Bloch representation directly leads to the notion of so-called sector lengths. As mentioned, the length of the one-party Bloch vector quantifies the degree of mixing of the state. Accordingly, the length encodes information about the state that can be obtained in a basis-independent way. The sector lengths are its direct extension to multipartite quantum systems. To proceed, recall the generalised Bloch decomposition of an n-qudit state in Eq. (<ref>). Sector lengths are defined as follows <cit.>: S_k(ϱ) = ∑_k non-zero indices T_j_1⋯ j_n^2, where S_0 = T_0⋯0^2 = 1 due to the normalisation condition tr(ϱ)=1. The sector lengths S_k quantify the amount of k-body correlations in the state ϱ. For example, in the case of the three-qubit GHZ state |GHZ_3⟩ = (|000⟩+|111⟩)/√(2), one obtains (S_1, S_2, S_3) = (0,3,4). Additionally, note that Eq. (<ref>) along with Eq. (<ref>) give an entanglement criterion in terms of sector lengths. Sector lengths have several useful properties. (i) The sector lengths are invariant under any local unitary transformation. That is, for a local unitary V_1⊗⋯⊗ V_n, it holds that S_k(ϱ)=S_k(V_1⊗⋯⊗ V_nϱ V_1^†⊗⋯⊗ V_n^†). (ii) The sector lengths are convex on quantum states. That is, for the mixed quantum state ϱ=∑_i p_i |ψ_i⟩⟨ψ_i|, it holds that S_k (∑_i p_i |ψ_i⟩⟨ψ_i|) ≤∑_i p_i S_k(|ψ_i⟩). (iii) The sector lengths have a convolution property: For an n-particle product state ϱ_P ⊗ϱ_Q, where ϱ_P and ϱ_Q are, respectively, j-particle and (n-j)-particle states, we have S_k(ϱ_P ⊗ϱ_Q) = ∑_i=0^k S_i(ϱ_P) S_k-i(ϱ_Q) <cit.>. (iv) The sector lengths are directly associated with the purity of ϱ, namely 𝒫(ϱ) = 1/d^n∑_k=0^nS_k(ϱ). That is, the purity can be decomposed into the sector lengths of different orders. Using this relation, the sector lengths can always be represented as the purities of reduced states of ϱ, and vice versa. (v) The n-body (often called full-body) sector length S_n for all n-qubit states has been shown to be always maximised by the n-qubit GHZ state, denoted by |GHZ_n⟩=(|0⟩^⊗ n + |1⟩^⊗ n)/√(2). Its maximal value is given by S_n(GHZ_n) = 2^(n-1) + δ_n,even <cit.>. However, this is not always true in higher dimensions, i.e. quantum states that are not of the GHZ form can attain the maximal S_n value <cit.>. Even more interestingly, it has been demonstrated that there exist multipartite entangled states with zero S_n for an odd number of qubits <cit.>. Finally and importantly, the sector lengths can be directly obtained from the randomised measurement scheme. In fact, the k-body sector lengths S_k can be represented as averages over all second-order moments of random correlations in k-particle subsystems. The entanglement criteria using the sector lengths are therefore accessible with randomised measurements, for details see Sec. <ref>. §.§ Local unitary invariants Moments of random correlations and sector lengths are special cases of a broader class of quantum state functions which are invariant under local unitary transformations. In general, such local unitary (LU) invariants q(ϱ) are functions of the quantum state ϱ for which q(V_1⊗⋯⊗ V_nϱ V_1^†⊗⋯⊗ V_n^†) = q(ϱ), for any V_1⊗⋯⊗ V_n with V_i in the d-dimensional unitary group. Since the purity of the global state 𝒫(ϱ) = tr(ϱ^2) is invariant under any global unitary, one can interpret the relation in Eq. (<ref>) as a decomposition of a global unitary invariant into LU invariants.
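As a small numerical illustration of these notions — our own sketch, not taken from the works reviewed here — the sector lengths can be computed directly from the Bloch decomposition and checked to be invariant under local unitaries; for the three-qubit GHZ state one recovers the values (S_1, S_2, S_3) = (0, 3, 4) quoted above.

import numpy as np
from itertools import product
from functools import reduce

paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def sector_lengths(rho, n):
    S = np.zeros(n + 1)
    for idx in product(range(4), repeat=n):
        op = reduce(np.kron, [paulis[j] for j in idx])
        t = np.real(np.trace(rho @ op))        # Bloch coefficient T_{j_1 ... j_n}
        S[sum(j != 0 for j in idx)] += t ** 2  # k = number of non-identity indices
    return S

def random_qubit_unitary(rng):
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(z)                     # a random 2 x 2 unitary
    return q

rng = np.random.default_rng(7)
ghz = np.zeros(8); ghz[[0, 7]] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())
V = reduce(np.kron, [random_qubit_unitary(rng) for _ in range(3)])
print(sector_lengths(rho, 3))                  # [1. 0. 3. 4.]  (S_0 = 1 included)
print(sector_lengths(V @ rho @ V.conj().T, 3)) # the same values: LU invariance

The sum of the printed entries divided by d^n = 8 equals the purity of the state, in line with property (iv) above.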
Moreover, for two qubits, tr(ϱ^3) can be expressed with the help of the determinant det(T) of the correlation tensor introduced in Eq. (<ref>). This is one of the so-called Makhlin LU invariants <cit.>. Here a nontrivial question arises: How can we access certain LU invariants from randomised measurements? Since LU invariants include detailed information about quantum correlations in the state, addressing this question can be related to the improvement of entanglement detection and can reveal many other important properties of the state. In Secs. <ref> and <ref>, we discuss how LU invariants for two qubits can be characterised by randomised measurements. Another example of an LU invariant is the Rényi entropy of order α, defined as H_α(ϱ_M) = 1/(1-α) log[tr(ϱ_M^α)], where α∈ℝ, α≠0, α≠1, the reduced state is defined as ϱ_M = tr_M̄(ϱ) for any subset M ⊆ {1,2,…, n}, and the trace is taken over the complement M̄. In particular, the second-order Rényi entropy H_2 is often used to analyse entanglement <cit.>. This quantity is accessible through randomised measurements, see Secs. <ref> and <ref>. §.§ PT moments A different route to witnessing entanglement via randomised measurements is based on the Peres-Horodecki separability criterion <cit.>. It states that if a bipartite state ϱ_AB is separable, then the partially transposed density matrix ϱ^Γ_B_AB:=(id⊗ T)(ϱ_AB) is positive semi-definite, where id is the identity map and T is the transposition map. States with this property are called PPT states, as they have a positive semidefinite partial transpose. On the contrary, if ϱ^Γ_B_AB has negative eigenvalues, the state is called NPT and must be entangled. Importantly, for systems consisting of two qubits or a qubit and a qutrit, a positive semi-definite ϱ^Γ_B_AB is also a sufficient criterion for separability. In general, however, there exist entangled states that the PPT criterion can never detect. The criterion can also be used to quantify entanglement and the corresponding entanglement monotone is provided by the logarithmic negativity defined as <cit.> E_N(ϱ_AB)=log ||ϱ_AB^Γ_B||_1=log∑_i |λ_i|, with || · ||_1 being the trace norm and λ_i the eigenvalues of the partially transposed density matrix. In order to make the PPT criterion accessible by randomised measurements one considers the so-called PT (or negativity) moments. The k-th PT moment is defined as p_k(ϱ_AB) = tr[(ϱ_AB^Γ_B)^k]. These quantities are LU invariant for any order k, since the eigenvalues of the partially transposed matrix are LU invariant. Similarly to the moments of the given density matrix [see their use in Eq. (<ref>)], these moments can be determined by randomised measurements <cit.>, as described in Sec. <ref>. Furthermore, it is a well-established mathematical fact that the coefficients of the characteristic polynomial of a matrix can be expressed in terms of traces of the powers of this matrix <cit.>, so knowledge of the moments p_k for any k allows one to evaluate the PPT criterion. In practice, however, only a few of these moments can be measured and the question arises: Is this data compatible with a PPT density matrix or not? This question is similar to security analysis in entanglement-based quantum key distribution, where the protocol is insecure if the measured data is compatible with a separable state <cit.>. In Ref. <cit.> it was shown via a machine-learning approach that the logarithmic negativity can be estimated using p_3. As an analytical result, the following moment-based entanglement criterion was introduced in Ref.
<cit.>: p_3 < p_2^2. This so-called p_3-PPT criterion was utilised to detect entanglement in the experimental data from Ref. <cit.>. It is worth noting that the PT-moment approach, even with lower orders, can detect entanglement of the Werner state in a necessary and sufficient manner for any dimension; for more details see the Appendix in Ref. <cit.>. Still, the p_3-PPT criterion is not the optimal way to extract information from the moments p_2 and p_3. This problem can be solved with a family of optimal criteria (p_n-OPPT) derived in Ref. <cit.>, see also Ref. <cit.>. For the special case of n=3, the necessary and sufficient p_3-OPPT condition for compatibility of the PT moments with a PPT state is given by p_3 ≥α x^3+(1-α x)^3, where α=⌊1/p_2 ⌋ and x=[α+(α [p_2(α+1)-1])^1/2]/[α (α +1)]. Note that the above expression does not depend on the Hilbert space dimension. Also, if the PT moments are compatible with the spectrum of a PPT state, they are compatible with a separable state, since one can directly write down a separable state (diagonal in the computational basis) for a given nonnegative spectrum of the partial transpose. Finally, the p_n-OPPT criteria are defined for all n ≥ 3 and demonstrably stronger than their p_n-PPT counterparts as shown with numerical simulations <cit.>. §.§ Bell-type inequalities Another topic where randomised measurements are highly useful tools is tests of non-local correlations in quantum systems. One of the most fundamental properties of quantum mechanics is that measurement results at spatially separated measurement sites exhibit correlations that do not permit a classical description. As shown by Bell's theorem <cit.>, such correlations can only be explained if certain fundamental assumptions about the physical world are given up. These include relativistic causality, the possibility to choose measurement settings independently of the experimental results or the ability to causally explain the occurrence of the outcomes altogether, sometimes also referred to as giving up “realism”, for a careful analysis see <cit.>. Apart from such basic questions, this class of correlations is also an important resource used in numerous quantum information processing protocols, in particular in quantum key distribution <cit.>, in the certified generation of unpredictable randomness <cit.>, and in reducing the communication complexity of computation <cit.>. These unique properties of quantum systems are sometimes called “quantum nonlocality” or “Bell non-locality” in the literature <cit.>. Whether a given state produces Bell non-locality is usually tested via inequalities which give bounds on functions of expectation values for joint measurements at spatially separated sites sharing an entangled state <cit.>. The simplest of these, the Clauser-Horne-Shimony-Holt (CHSH) inequality <cit.>, applies to the scenario of two observers who share an entangled state of two qubits. They perform dichotomic measurements, with the first observer choosing between two alternative observables 𝐮_1 ·σ and 𝐮_1' ·σ, and the second observer between 𝐮_2 ·σ and 𝐮_2' ·σ. It can be proven that any Bell-local model <cit.>, i.e. any model respecting all assumptions of Bell's theorem, satisfies the inequality
For a suitable maximally entangled state and an optimal choice of observables as for example 𝐮_1 = 𝐱, 𝐮_1' = 𝐳, and 𝐮_2 = (𝐱 + 𝐳)/√(2), 𝐮_2' = (𝐱 - 𝐳)/√(2) we obtain the largest value of the left-hand side of the inequality S_max=2√(2) >2 and hence the maximal violation of the inequality. The quantum violations of similar inequalities have been observed in precisely dedicated experiments <cit.>. One can also ask if a Bell-local model exists when the CHSH inequality is not violated. The answer is negative and a necessary and sufficient condition for the existence of such a model is given by a set of inequalities (not just one of them) which describe the facets of so-called Bell-Pitowsky polytope <cit.>. These polytopes are different for scenarios with different numbers of observers, measurement settings and outcomes. It turns out that for two parties, each choosing between two dichotomic measurements, it is sufficient to permute the observables in the CHSH inequality to generate the complete set of 16 CHSH inequalities describing the Bell-Pitowsky polytope. For more complex scenarios the corresponding polytopes have been fully characterised analytically only for special cases <cit.>. The maximum value S of the CHSH expression that a given state ϱ can achieve, optimised over the choice of measurements, is given by S(ϱ) = 2 √(λ_1 + λ_2), where λ's are the two largest eigenvalues of the matrix T^⊤ T <cit.> with T defined in Eq. (<ref>). The corresponding state violates the CHSH inequality if and only if √(λ_1 + λ_2) > 1. This condition can also be expressed directly in terms of correlation matrix elements <cit.> as ∑_i,j=x,y T_ij^2 > 1, where the axes x and y define the plane in which the optimal settings for the inequality lie. While, in general, a particular choice of settings is crucial to obtain the violation of Bell-type inequalities such as the CHSH inequality, it is interesting to investigate whether Bell non-local correlations can also be witnessed in the scenario of randomised measurements. Interestingly, it turns out that with suitable states a violation can still be guaranteed, even if certain fixed random rotations are added between the reference frames of the two observers or even with randomness in the local frames. A detailed discussion of this topic will be presented in  Secs. <ref> and <ref>. § ENTANGLEMENT In this section, we summarise several results on detecting entanglement using randomised measurements. We begin with an introduction to the theory of entanglement and the general framework of randomised measurements focusing on the t-th moment of the distributions of correlation values in multipartite high-dimensional systems. We also provide an overview of quantum t-designs as a powerful tool for the computation of integrals over Haar randomly distributed unitaries. Subsequently, several applications of these tools are discussed to detect and characterise entanglement in a broad range of scenarios. The section concludes with a discussion of the effects of statistical noise due to limited data in experimental situations and proper strategies to account for this noise when applying the tools presented before. §.§ Multipartite entanglement In the previous section, basic intuitions behind the structure of entanglement and its detection with randomised measurements have been introduced. Here, we discuss entanglement beyond the two-qubit scenario to include systems with an arbitrary dimension and number of parties. 
The interested reader can find more details about the field of multipartite entanglement in several in-depth review articles <cit.>. An n-partite d-dimensional quantum state (n-qudit) defined in the Hilbert space ℋ_d^⊗ n is fully separable if it can be written as ϱ_fs = ∑_i p_i ϱ_i^1 ⊗ϱ_i^2 ⊗⋯⊗ϱ_i^n, where ϱ_i^j are quantum states and the p_i form a probability distribution, i.e. p_i ≥ 0 and ∑_i p_i = 1. We say that an n-particle state contains entanglement if it is not fully separable. Note that this does not imply anything about the structure of the entanglement, as for example whether all parties are entangled with each other. One option to intuitively understand different types of entanglement is to consider how states are prepared. For instance, ϱ_fs can be prepared from a product state by local operations and classical communication (LOCC) by operating on each particle separately. One can also consider states which can be prepared from a product state by LOCC where the operations are performed jointly on groups of particles (not just on one particle). For instance, a state is called biseparable with respect to a bipartition M|M̄ for a subset M ⊂{1,2,…, n} if it can be written as ϱ_M|M̄ = ∑_i q_i^M ϱ_i^M ⊗ϱ_i^M̄, where the q_i^M form a probability distribution, M̄ is the complement of M, and ϱ_i^M, ϱ_i^M̄ are quantum states of the particles in the sets M and M̄, respectively. In order to prepare the state ϱ_M|M̄ via LOCC one needs to operate jointly on subsystems in the set M and in the set M̄. Moreover, one can consider mixtures of biseparable states for all bipartitions, ϱ_bs = ∑_M p_M ϱ_M|M̄, where p_M are probabilities and the summation includes at most 2^(n-1)-1 terms. Such a general state is simply called biseparable (without reference to any concrete bipartition). A quantum state which cannot be written in the form (<ref>) is called genuine n-particle entangled and involves entanglement between all subsystems. For example, a three-particle state is called biseparable for a bipartition A|BC if ϱ_A|BC =∑_i q_i^A ϱ_i^A ⊗ϱ_i^BC, where ϱ_i^BC may be entangled. We can furthermore construct mixtures of biseparable states with respect to different partitions, i.e. states of the form ϱ_bs = p_Aϱ_A|BC + p_Bϱ_B|CA + p_Cϱ_C|AB, where the p_A, p_B, p_C are probabilities. In contrast, a typical example of a genuine n-qudit entangled state is the generalised Greenberger–Horne–Zeilinger (GHZ) state given by |GHZ(n,d)⟩ = 1/√(d)∑_i=0^d-1|i⟩^⊗ n. In particular, in two-qudit systems (that is, n=2), this state is the maximally entangled two-particle state. Other examples of genuine n-partite entangled states include W states <cit.>, Dicke states <cit.>, cluster states <cit.>, graph states <cit.>, and absolutely maximally entangled (AME) states <cit.>. The question of whether a given quantum state is separable or entangled is known as the separability problem and is central to quantum information theory. It has several aspects: * Even if the density matrix is completely known, in general it remains a complicated mathematical problem to determine whether a state is entangled, known to belong to the NP-hard class of computational complexity <cit.>. Following the Choi-Jamiolkowski isomorphism, connecting quantum states and channels <cit.>, the separability problem is equivalent to the problem of distinguishing positive and completely positive maps which is as yet unsolved. * In experiments, sometimes only partial information about the state is accessible. If some a priori information about the state is available, e.g.
that an experiment is aimed at producing a certain entangled state, then so-called entanglement witnesses may allow for the efficient detection using directly measurable observables <cit.>. In other situations, where one cannot be sure about the appropriate description of measurements and cannot trust the underlying quantum devices, it is still possible to certify entanglement in a device-independent manner <cit.>, using, e.g. Bell-type inequalities, based only on the measurement data observed from input-output statistics <cit.>. Moreover, when considering ensembles of quantum particles, such as cold atoms, individual control over local subsystems may be lost, but entanglement can still be characterised by measuring collective angular momenta and applying spin squeezing inequalities <cit.>. * Addressing the separability problem can highlight distinctions between quantum physics and classical physics in terms of correlations. The features of entanglement, such as the negativity of conditional entropy <cit.>, monogamy of entanglement <cit.>, and the presence of bound entanglement <cit.>, are associated with entanglement conditions from fundamental and operational viewpoints. In fact, whether a given entangled state is useful or not, can be decided by certain thresholds in terms of several quantum communication protocols <cit.> and quantum metrology <cit.>. * As a generalisation of the separability problem, one can ask, for example, how many partitions are separated in a multipartite state based on the concept of k-separability <cit.> (see Sec. <ref>), or how many particles are entangled based on the concept of k-producibility <cit.>. Other interesting concepts are given by k-stretchability <cit.>, tensor rank <cit.>, and the bipartite and multipartite dimensionality <cit.> of entanglement. Genuine multipartite entanglement can in turn again be classified into several types, such as the W class or GHZ class of states, for details see Sec. <ref>. More recently, also different notions of network entanglement came into the focus of attention <cit.>. §.§ Randomised measurements on multipartite systems While in Section <ref> we have mainly discussed the second moment of correlations obtained via randomised measurements on two qubits, in the following we generalise this scheme to t-th moments in n-particle d-dimensional quantum systems. Using this formulation, we will review several systematic methods to detect various types of entanglement. When measuring an observable ℳ on a state with n parties, ϱ∈ℋ_d^⊗ n, such that each party rotates their measurement direction in an arbitrary manner according to a randomly chosen unitary matrix U_i, the corresponding correlation function reads E(ℳ, U_1 ⊗ U_2 ⊗⋯⊗ U_n) = [ϱ (U_1 ⊗ U_2 ⊗⋯⊗ U_n)^†ℳ (U_1 ⊗ U_2 ⊗⋯⊗ U_n) ]. This correlation function depends not only on ϱ and ℳ but also on the choice of local unitaries U_1 ⊗ U_2 ⊗⋯⊗ U_n. Here the unitary matrix U_i is defined in the d-dimensional unitary group acting on the i-th subsystem for i=1,2,…, n. By sampling random unitaries uniformly from the unitary group, the resulting distribution of the correlation functions can be characterised by its moments with ℛ^(t)_ℳ(ϱ) =N_n,d,t∫dμ(U_1) ∫dμ(U_2) ⋯∫dμ(U_n) [E(ℳ, U_1 ⊗ U_2 ⊗⋯⊗ U_n)]^t, where the integral is taken according to the Haar measure dμ(U). Here, we denote N_n,d,t as a suitable normalisation constant, which is defined differently across the literature. For the case of n=d=2 we arrive at the form of Eq. 
(<ref>) independently of which Pauli product observable ℳ = σ_i ⊗σ_j with i,j=x,y,z is chosen. In the same manner, without loss of generality the observable can be assumed to be σ_z ⊗σ_z ⊗…σ_z also for larger n. To explain the properties of the moments, let us recap the notion of the Haar unitary measure. Let 𝒰(d) be the group of all d × d unitaries and f(U) be a function on U ∈𝒰(d). Consider an integral of f(U) over the unitary group 𝒰(d) with respect to the Haar measure dμ(U). One of the most important properties of the Haar measure is the left and right invariance under shifts via multiplication by a unitary V ∈𝒰(d), which is respectively given by ∫dμ(U) f(U) = ∫dμ(U) f(VU) = ∫dμ(U) f(UV), see Refs. <cit.> for further details. A general parametrization of the unitary group 𝒰(d) and the associated Haar measure are known <cit.>. For instance, any single-qubit unitary (d=2) can be written in the Euler angle representation <cit.> as U(α, β, γ) = U_z(α)U_y(β)U_z(γ), where U_i(θ) = e^-iθσ_i/2 for i=y,z and the Haar measure dμ(U) = sin(β) dαdβdγ. By their definition, the moments are invariant under any local unitary transformation. More precisely, since the Haar measure is invariant under left and right translation, it holds that ℛ^(t)_ℳ(ϱ) = ℛ^(t)_ℳ (V_1⊗⋯⊗ V_n ϱ V_1^†⊗⋯⊗ V_n^†), for any local unitary V_1⊗⋯⊗ V_n. Thus, we can characterise the state ϱ with the moments ℛ^(t)(ϱ) in a local-basis-independent manner, that is, independent of reference frames between parties or unknown local unitary noise. This invariance is one of the most important properties of randomised measurements and suggests that the moments of the measured distributions contain essential information about the entanglement of the corresponding quantum states. In general, the observable ℳ does not necessarily have to be a product observable of the form ℳ_P = M_1 ⊗ M_2 ⊗⋯⊗ M_n, but can be of the more general form ℳ_NP = ∑_i m_i M_1^i ⊗ M_2^i ⊗⋯⊗ M_n^i, with real coefficients m_i <cit.>. The measurement of non-product observables requires a certain restriction of randomness, where the unitary cannot change significantly while the various observers switch between the particular local observables M^i_k in a synchronised manner. However, as a tradeoff, it enables the extraction of additional information not accessible via product observables as discussed in Sec. <ref> and also Secs.<ref> and <ref>. By discarding the measurements of some parties, one can obtain the marginal moments of the reduced states of ϱ. For illustration, let us consider a three-particle state ϱ_ABC and discard the measurements of the parties B and C, that is, M_B=M_C=1. This yields the corresponding one-body marginal moments ℛ^(t)_A(ϱ_A) of the party A, while on the other hand, the case of M_C=1 yields the two-body marginal moments ℛ^(t)_AB(ϱ_AB) of the parties A and B. Here, ϱ_A, ϱ_AB are the one and two-body reduced states of ϱ_ABC, respectively. Clearly, in the case with M_X ≠1 for X=A,B,C, the full three-body moments ℛ_ABC^(t) (ϱ_ABC) are available. Similarly, all the k-body moments for k ∈ [1, n] can be accessed by discarding the corresponding measurements of (n-k) parties. In particular, the averaging over all second-order k-body moments with product observables yields the k-body sector length S_k, discussed in Sec. <ref>. Moreover, if higher-order moments are additionally considered, detailed information may be extracted, allowing more powerful entanglement detection schemes. 
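As a simple numerical illustration of the sampling procedure described above, the following Python sketch draws Haar-random single-qubit unitaries via the Euler-angle parametrization U(α, β, γ) = U_z(α)U_y(β)U_z(γ) and estimates the second moment ℛ^(2) for a two-qubit Bell state. The function names, the number of sampled settings and the use of exact (infinite-statistics) correlation values are illustrative assumptions and do not correspond to any particular experiment; with the normalisation constant chosen as N_{2,2,2} = 3^2, the estimate should approach the value 3 obtained for a maximally entangled two-qubit state.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def haar_qubit_unitary(rng):
    """Haar-random single-qubit unitary from the Euler-angle parametrization
    U = U_z(alpha) U_y(beta) U_z(gamma) with measure sin(beta) dalpha dbeta dgamma."""
    alpha, gamma = rng.uniform(0.0, 2.0 * np.pi, size=2)
    beta = np.arccos(rng.uniform(-1.0, 1.0))            # accounts for the sin(beta) weight
    uz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    uy = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return uz(alpha) @ uy(beta) @ uz(gamma)

# two-qubit singlet state |psi^-> as a density matrix
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

rng = np.random.default_rng(0)
M = 20000                                               # number of random settings (illustrative)
sq_sum = 0.0
for _ in range(M):
    U1, U2 = haar_qubit_unitary(rng), haar_qubit_unitary(rng)
    obs = np.kron(U1.conj().T @ sz @ U1, U2.conj().T @ sz @ U2)
    E = np.real(np.trace(rho @ obs))                    # correlation E(M, U1 x U2)
    sq_sum += E**2

R2 = 3**2 * sq_sum / M                                  # normalisation N_{2,2,2} = 3^n
print(f"estimated second moment R^(2) = {R2:.3f} (ideal value for a Bell state: 3)")
```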
On a more technical level, however, this requires at least two additional steps. First since in general the moments depend on the choice of observable, the definition of the moments is based on finding suitable families of observables with equal spectra, i.e. local unitary equivalent observables. In fact, in the case with t=2, the moments are independent of the choice of measurement observables as long as the observables are traceless <cit.>, which, in general, is not the case <cit.>. The next step is to find entanglement criteria using the evaluated higher-order moments. Intuitively, one can power up entanglement detection by combining, e.g. ℛ^(2) and ℛ^(4), rather than using solely ℛ^(2). For this purpose, one should systematically search for the most effective combination of such nonlinear functions. Addressing the above questions is nontrivial and will be considered in more detail in Secs. <ref> and <ref>. For the qubit case, d=2, the Haar unitary integrals can be replaced by integrals with respect to the uniform measure on the Bloch sphere S^2, ∫dμ(U) →1/4π∫_S^2d𝐮 with d𝐮= sin(θ) dθdϕ. With the help of quantum designs, one may simplify the integrals as sums over certain directions on the Bloch sphere. §.§ Quantum t-designs In general, quantities which are at least approximately accessible by randomised measurements correspond to integrals over the space of unitary rotations. This has two potential drawbacks. For once, they require a large amount of sampled measurement directions to be approximated well, see e.g. <cit.> and secondly, the integral form is cumbersome for some analytical derivations and proofs. Quantum designs represent a powerful tool to address both issues by replacing the integration over the full space with the average over several particular points only. In the following, we will give an overview of the concept of designs both from a mathematical and a physical perspective and show how they can be applied in the context of randomised measurements. §.§.§ Spherical t-designs Historically, quantum t-designs were discussed by analogy with classical t-designs in combinatorial mathematics. Their basic idea is the following. Let us consider a real quadratic function f_2(x) for a variable x and take an integral in the interval from a to b. According to the rule found by Thomas Simpson in the 18th century, it holds that the integral for the quadratic function can be exactly evaluated as a simple expression using only three points, namely ∫_a^b dx f_2(x) = b-a/6[ f_2(a)+4f_2(a+b/2)+f_2(b) ]. An extension of Simpson's rule to a greater number of points is possible, which is called the Gauss-Christoffel quadrature rule. A spherical t-design can be seen as a generalisation of Simpson's rule for the efficient computation of integrals of certain polynomials over some spheres <cit.>. In fact, a spherical t-design has already been used in Sec. <ref> to simply evaluate the moments ℛ^(2). Let 𝒮_n-1 be the n-dimensional real unit sphere and let X={x: x∈𝒮_n-1} be a finite set of points on it with the number of elements K = |X|. We call this set a spherical t-design if 1/K∑_x∈ Xf_t(x) = ∫dμ(x) f_t(x), for any homogeneous polynomial function f_t(x) in n variables with degree t, where dμ(x) is the spherical measure in n dimensions. The spherical design property ensures that integrals over the entire sphere can be efficiently computed by taking the average over the set of only K different points. 
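The following minimal check illustrates the design property numerically for the six Pauli measurement directions on the Bloch sphere, i.e. the vertices of the octahedron, which form a spherical 3-design: averages of homogeneous polynomials up to degree three over these six points reproduce the corresponding sphere integrals, whereas a degree-four polynomial does not. The concrete polynomials and the Monte Carlo reference integration are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# the six Pauli eigendirections: vertices of the octahedron on S^2
octahedron = np.array([[ 1, 0, 0], [-1, 0, 0],
                       [ 0, 1, 0], [ 0,-1, 0],
                       [ 0, 0, 1], [ 0, 0,-1]], dtype=float)

def uniform_sphere(n):
    """Uniformly distributed points on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

polynomials = {
    "degree 2": lambda x: x[:, 0] * x[:, 1] + 2.0 * x[:, 2]**2,
    "degree 3": lambda x: x[:, 0] * x[:, 1] * x[:, 2] + x[:, 2]**3,
    "degree 4": lambda x: x[:, 0]**4,
}

reference = uniform_sphere(1_000_000)                   # Monte Carlo sphere average
for name, f in polynomials.items():
    print(f"{name}: design average = {f(octahedron).mean():+.4f}, "
          f"sphere average = {f(reference).mean():+.4f}")
```

The mismatch appearing only at degree four reflects the fact that the octahedron is a spherical 3-design but not a 4-design, so higher moments require either more measurement directions or genuinely Haar-distributed settings.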
Clearly, any spherical t-design is also a spherical (t-1)-design and it can be shown spherical t-designs exist for any positive integer t and n <cit.>, although they may be difficult to construct explicitly <cit.>. Furthermore, as expected, if a design for a higher degree t is considered, then a larger number of points K is needed. §.§.§ Complex projective t-designs Complex projective t-designs (or quantum spherical t-designs) are a generalisation of spherical designs to a complex vector space <cit.>. As such they allow for example to evaluate expressions based on a random sampling of quantum states. A finite set of unit vectors D = {|ψ_i⟩: |ψ_i⟩∈ C𝒮_d-1}_i=1^K defined on a d-dimensional sphere C𝒮_d-1 in the complex vector space, forms a complex projective t-design if 1/K∑_|ψ_i⟩∈ D P_t(ψ_i) = ∫dμ(ψ) P_t(ψ), for any homogeneous polynomial function P_t in 2d variables with degree t (that is, d variables with degree t and their complex conjugates with degree t), where dμ(ψ) is the spherical measure on the complex unit sphere C𝒮_d-1. It is here important to note that C𝒮_d-1 is isomorphic to the d-dimensional projective Hilbert space denoted as P(ℋ_d), where complex unit vectors |x⟩, |y⟩∈ P(ℋ_d) are identified iff |x⟩ = e^iϕ|y⟩ with a real ϕ <cit.>. For example, the Bloch sphere is known as P(ℋ_2), in which a point on the surface of the sphere corresponds to a pure single-qubit state, up to a phase. In this state space, any two states can be distinguished by the so-called Fubini-Study measure, which is invariant under the action of U(1), for details see Refs. <cit.>. Since polynomials of degree t can be written as linear functions on t copies of a state, the definition of complex projective t-designs is equivalent to requiring 1/K∑_|ψ_i⟩∈ D(|ψ_i⟩⟨ψ_i|)^⊗ t = ∫dμ(ψ) (|ψ⟩⟨ψ|)^⊗ t. In this form, this is also called the quantum state t-design and can be considered as an ensemble of states that is indistinguishable from a uniform random ensemble over all states, if one considers t-fold copies of states selected from that ensemble. Since the integral on the right-hand side of Eq. (<ref>) is proportional to the projector onto the symmetric subspace <cit.> (or see Lemma 2.2.2. in Ref. <cit.>), one can simplify this to ∫dμ(ψ) (|ψ⟩⟨ψ|)^⊗ t = P_sym^(t)/d_sym^(t), where P_sym^(t) is the projector onto the permutation symmetric subspace and d_sym^(t) = d+t-1t is its dimension. In particular, for multi-qubit systems (d=2), the symmetric subspace is spanned by the Dicke states {|D_t,m⟩}_m=0^t given by |D_t,m⟩ = 1/√(tm)∑_k π_k (|1⟩^⊗ m⊗|0⟩^⊗ (t-m)), where the summation in ∑_k π_k is over all permutations between the qubits that lead to different terms. A concrete example is the state |D_3,1⟩ = (|001⟩ + |010⟩ + |100⟩)/√(3). Since the dimension of this subspace is t+1, the projector can be written as P_sym^(t) = 1/t+1∑_m=0^t |D_t,m⟩⟨D_t,m|. In order to explain the structure of P_sym^(t) more generally, let us denote by Sym(t) the symmetric group of a degree t on the set {1,2,…, t} and W_π as a permutation operator on ℋ_d^⊗ t representing a permutation π = π(1) …π(t) ∈Sym(t) such that W_π|i_1, …, i_t⟩ = |i_π(1), …, i_π(t)⟩. Then one can write P_sym^(t) = (1/t!) ∑_π∈Sym(t) W_π. Examples for t=1 and t=2 are ∫dμ(ψ) |ψ⟩⟨ψ| = 1_d/d, ∫dμ(ψ) (|ψ⟩⟨ψ|)^⊗ 2 = 1/d(d+1) (1_d^⊗ 2 + S), where S = ∑_i,j|i⟩⟨j|⊗|j⟩⟨i| denotes the SWAP (or flip) operator with S|a⟩⊗|b⟩ = |b⟩⊗|a⟩. Note that Eq. (<ref>) implies relations between the moments (ϱ^m) for any single-qudit state ϱ <cit.>. 
Furthermore, another equivalent definition of complex projective t-designs is given by the condition 1/K^2∑_ |ψ_i ⟩, |ψ_j ⟩∈ D |⟨ψ_i|ψ_j ⟩|^2t =1/d_sym^(t). The left-hand side is called t-th frame potential. According to the so-called Welch bound <cit.>, it is always greater than or equal to the right-hand side, where the equality is saturated if and only if the set D forms the complex projective t-designs. Let us consider some examples of projective designs. First, a trivial example of a complex projective 1-design is a set of orthonormal basis vectors {|i⟩}_i=1^d, which leads to (1/d)∑_i=1^d|i⟩⟨i|= 1_d/d. Second, a typical example of complex projective 2-designs are so-called mutually unbiased bases (MUBs). A collection {M_k} of orthonormal bases M_k = {|i_k⟩}_i=1^d for a d-dimensional Hilbert space is called mutually unbiased if |⟨ i_k|j_l ⟩|^2 = 1/d, for any i,j with k ≠ l, i.e. the overlap of any pair of vectors from different bases is equal <cit.>. For the case of d=2, a set of MUBs is given by {M_1, M_2, M_3} with M_1 = {|0⟩,|1⟩}, M_2 = {|+⟩,|-⟩}, and M_3 = {|+i⟩,|-i⟩}. Here, the computational bases {|0⟩, |1⟩}, {|±⟩=(|0⟩±|1⟩)/√(2)}, and {|± i⟩=(|0⟩± i|1⟩)/√(2)} are the normalised eigenvectors of σ_z, σ_x, and σ_x σ_z. The size of maximal sets of MUBs for a given dimension d is an open problem and only partial answers are known. In fact, this has been recognised as one of the five most important open problems in quantum information theory <cit.>. It is known that in any dimension d the maximum number of MUBs cannot be more than d+1 <cit.>. In fact, for prime-power dimensions d=p^r, sets of d+1 MUBs can be constructed <cit.>. Furthermore, for the dimensions d=p^2 or d=2^r, an experimental implementation of MUBs is possible <cit.>. The smallest dimension which is not a power of a prime and where the maximal number of MUBs is unknown is d=6 <cit.>. Finally and importantly, any collection of (d+1) MUBs saturates the Welch bound and therefore forms a complex projective 2-design <cit.>. §.§.§ Unitary t-designs In the case of qubits, spherical designs are suited to evaluate integrals over random unitaries of measurement settings as those can be mapped to rotations on the Bloch sphere. For higher dimensional systems, however, such a mapping no longer exists and the randomised scenario can be addressed by general unitary designs. A set of unitaries G = {U_i: U_i ∈𝒰(d)}_i=1^K forms a unitary t-design if 1/K∑_U_i ∈ G P_t(U_i) =∫dμ(U) P_t(U), for any homogeneous polynomial function P_t in 2d^2 variables with degree t (that is, on the elements of unitary matrices in 𝒰(d) with degree t and on their complex conjugates with degree t), where dμ(U) is the Haar unitary measure on 𝒰(d). For details about unitary t-design, see Refs. <cit.>. Similarly to complex projective designs, there are several equivalent definitions of unitary t-designs. One is given by 1/K∑_U_i ∈ G U_i^⊗ t X (U_i^†)^⊗ t =∫dμ(U) U^⊗ t X (U^†)^⊗ t, for any operator X ∈ℋ_d^⊗ t. An important observation here is that if we set {|ψ_i⟩} = {U_i |0⟩}, then Eq. (<ref>) leads to Eq. (<ref>), i.e. any unitary t-design gives rise to a quantum state t-design. The converse is not necessarily true, even if a set of unitaries creates a state design via {|ψ_i⟩} = {U_i |0⟩}, it does not constitute a unitary design. This simply follows from the fact that a relation like {|ψ_i⟩} = {U_i |0⟩} does not determine the U_i in a unique way. In order to determine the evaluated expression in an analogy with Eq. (<ref>), note that the right-hand side in Eq. 
(<ref>) commutes with all unitaries V^⊗ t for V ∈𝒰(d), due to the left and right invariance of the Haar measure. Then, according to the Schur-Weyl duality, if an operator A ∈ℋ_d^⊗ t obeys [A, V^⊗ t] = 0 for any V ∈𝒰(d), then A can be written in a linear combination of subsystem permutation operators W_π (while the converse statement is also true) <cit.>. Thus, one has ∫dμ(U) U^⊗ t X (U^†)^⊗ t = ∑_π∈Sym(t) x_π W_π, where each of x_π can be found with the help of the so-called Weingarten calculus <cit.>. As an example, we have ∫dμ(U) UXU^† = (X)/d1_d, ∫dμ(U) U^⊗ 2 X (U^†)^⊗ 2 = 1/d^2-1{[(X)-(XS)/d]1_d^⊗ 2 - [(X)/d-(XS)]S }, where S is the SWAP operator. We remark that the left-hand side in Eq. (<ref>) is called a twirling operation and it is a CPTP map. A quantum state obtained from the twirling operation is called a Werner state and is invariant under any U ⊗ U ⊗⋯⊗ U <cit.>. For two particles, states of the form (<ref>) were the first states where it was shown that entanglement does not imply Bell nonlocality <cit.>. For calculations with operators of the form (<ref>) and X = X_1 ⊗ X_2 it is useful to note the so-called SWAP trick: [(X_1 ⊗ X_2)S] = (X_1 X_2). Moreover, the SWAP trick can be generalised using cyclic permutation operators, e.g. for a cyclic permutation operator W_cyc with W_cyc|x_1, x_2, ⋯, x_n⟩ = |x_2, ⋯, x_n, x_1⟩, it holds that [(X_1 ⊗ X_2 ⊗⋯⊗ X_n)W_cyc] = (X_1 X_2 ⋯ X_n), see Refs. <cit.> for details and Refs. <cit.> for its applications. Cases with t=3, 4 are explicitly described in Example 3.27, and Example 3.28 in Ref. <cit.>. For more details, see <cit.>. Moreover, yet another equivalent definition of unitary t-designs is given in Ref. <cit.> 1/K^2∑_U_i, U_j ∈ G|(U_i U_j^†)|^2t = t! for d≥ t, (2t)!/t! (t+1)! for d=2, where the left-hand side is called t-th frame potential and the right-hand side gives its minimal value similar to the Welch bound in complex spherical designs. The frame potential is often employed as a useful measure to quantify the randomness of an ensemble of unitaries in terms of out-of-time-order correlation functions in quantum chaos <cit.>. For the scenario of n-qubit systems, an example of a unitary 1-design is the Pauli group 𝒫_n, the group of all n-fold tensor products of single-qubit Pauli matrices {1_2, σ_x, σ_y, σ_z}. This group does not form a unitary 2-design <cit.>, but note that we used Pauli measurements in Sec. <ref> as a form of a spherical design. In contrast, the Clifford group 𝒞_n, a group of unitaries with the property C∈𝒞_n if CPC^†∈𝒫_n for any P ∈𝒫_n, is known to be a unitary 2-design in this scenario. Furthermore, it has been shown that the Clifford group also forms a unitary 3-design, but not a unitary 4-design <cit.>. §.§.§ Applications to randomised measurements Finally, we show the usefulness of unitary designs in the scheme of randomised measurements. For the sake of simplicity, we focus on a three-qudit state ϱ_ABC and consider how to obtain its full-body sector length S_3 from the unitary two-design. Note that one can straightforwardly generalise this approach to the sector lengths S_k of a n-qudit state for any 1≤ k ≤ n. Let us consider the product observable ℳ = λ_a ⊗λ_b ⊗λ_c in the second-order moment in Eq. (<ref>), for any choice of Gell-Mann matrices with a,b,c=1,…,d^2-1. Substituting the generalised Bloch decomposition of ϱ_ABC in Eq. 
(<ref>) into the second-order moment, one can find ℛ^(2)_ℳ(ϱ_ABC) = N_3,d,2/d^6∑_j_A,j_B,j_C=1^d^2-1∑_k_A,k_B,k_C=1^d^2-1 T_j_A j_B j_C T_k_A k_B k_C[(A_jk⊗B_jk⊗C_jk) (λ_a^⊗ 2⊗λ_b^⊗ 2⊗λ_c^⊗ 2 )], where we used that [(M)]^k = (M^⊗ k) for any matrix M and integer k and we denoted the twirling result as X_jk = ∫dμ(U_X) U_X^⊗ 2 (λ_j_X⊗λ_k_X)(U_X^†)^⊗ 2 for X=A,B,C. Now X_jk can be simply evaluated using the formula in Eq. (<ref>) and be expressed as: X_jk = δ_j_X k_X(dS-1_d^⊗ 2)/(d^2-1), where we employed the SWAP trick mentioned above and the properties of the Gell-Mann matrices (λ_j) = 0 and (λ_j λ_k)=dδ_jk. As the last step, by inserting this form into the second moment in Eq. (<ref>) and choosing the normalisation constant as N_3,d,2 = (d^2-1)^3, one can have that ℛ^(2)_ℳ = S_3. An important lesson from this result is that randomised measurements of second order are an indirect implication of the SWAP operator. In higher-order cases, the permutation operators W_π will emerge according to the Schur-Weyl duality in Eq. (<ref>). This will play an important role in estimating the purity of a state, the overlap between two states, and PT moments, for details see Secs. <ref> and <ref>. §.§ Criteria for n-qubit entanglement In Sec. <ref>, we discussed the entanglement detection for a two-qubit state based on the second moment from randomised measurements. This can be generalised to the case of n parties, where the correlation function of ϱ is a straightforward generalisation of Eq. (<ref>), namely E(𝐮_1, …, 𝐮_n) = ⟨ r_1 … r_n ⟩ = (ϱ 𝐮_1 ·σ⊗…⊗𝐮_n ·σ). This is a special case of Eq. (<ref>) with d=2 and the product observable ℳ_P = σ_z ⊗⋯⊗σ_z, where we denote the randomised Pauli matrix as 𝐮·σ = Uσ_z U^†. Choosing the normalisation constant N_n,2,2 in Eq. (<ref>) as 3^n we can write the second moment as ℛ^(2) = ( 3/4 π)^n ∫d𝐮_1 …∫d𝐮_n E^2(𝐮_1, …, 𝐮_n). Similarly to Sec. <ref>, this integral can be simply evaluated using spherical 2-designs ℛ^(2)= ∑_j_1, … ,j_n=1,2,3[ϱ (σ_j_1⊗⋯⊗σ_j_n)]^2. Note that this quantity coincides with the full-body sector length S_n introduced in Sec. <ref>. With this expression, one can analytically find an entanglement criterion. In fact, since the second moment ℛ^(2) (that is, the sector length S_n) is convex in a state and invariant under LU transformations, the maximal value over n-qubit fully separable states is, without the loss of generality, achieved by a pure product state |0⟩^⊗ n. This immediately yields the entanglement criterion <cit.> ℛ^(2) > 1 ⇒ϱ is entangled. An equivalent criterion is that if the sector length S_n exceeds 1, then the n-qubit state ϱ is entangled, and similar criteria have been presented Refs. <cit.>. In Sec. <ref>, this inequality will be extended to detect high-dimensional entanglement based on second moments. The original result presented in (<ref>) was derived without the notion of spherical t-designs <cit.>. Moreover, this condition was shown to be necessary and sufficient for entanglement in pure states. The sufficient criterion for mixed states in terms of ℛ^(2) can still be formulated for any d, where the moment is invariant under not only local unitaries but also the choice of local operator basis, e.g. Gell-Mann or Heisenberg-Weyl matrices. Given that ℛ^(2) faithfully captures whether a state is entangled or not, it is natural to ask if it is an entanglement monotone <cit.>. 
This question is relevant even for pure states where one asks whether state |ψ⟩, endowed with ℛ^(2)(ψ) ≥ℛ^(2)(ϕ), can be converted via LOCC to another state |ϕ⟩. It turns out that such conversions are possible for bipartite systems in any dimensions, where the proof utilises Nielsen's majorisation criterion <cit.>, but there exist multipartite quantum states which can be converted via LOCC to states with larger ℛ^(2) <cit.>. Therefore, in general, the second moment ℛ^(2) is not an entanglement monotone. As a counterexample, consider the state |ψ⟩ = (|0⟩|ψ^-⟩ + |1⟩|ψ^+⟩)/ √(2), where |ψ^±⟩ = (|01⟩±|10⟩) / √(2) are the Bell states whose only non-zero correlation functions read T_zxx = T_zyy = -1, leading to S_3(ψ) = 2. Now measure the first qubit in the computational basis, do nothing if the outcome is “0” and apply σ_z to the second qubit if the result was “1”. In both cases, one deterministically ends up in the pure state |ϕ⟩ = |0⟩|ψ^-⟩ with the increased S_3(ϕ) = 3. §.§ Specialised criteria for two-qubit entanglement In the case of mixed states of two qubits, ℛ^(2) no longer provides a necessary and sufficient criterion for entanglement and higher-order moments may be used for improved criteria. Here, we discuss two approaches specific to this scenario, where the first one is motivated by considering states in the Bell-diagonal form and the second one represents a refined method to access the PPT criterion using more complex LU-invariant quantities, partially based on non-product observables. Additionally, also moments of the state itself are presented as a useful resource in this scenario. §.§.§ Bell-diagonal states As the name suggests, Bell-diagonal states of two qubits ϱ_BD can be represented as a mixture of the four Bell states. In terms of Pauli matrices, they have the form <cit.> ϱ_BD = 1/4(1_2^⊗ 2 + ∑_j=x,y,z T_jjσ_j ⊗σ_j ), and any state with a diagonal correlation matrix and maximally mixed marginals is Bell-diagonal. Since a Bell-diagonal state is parameterised only by the three parameters (T_xx, T_yy, T_zz), it allows for a much simpler analytical treatment than general states. For instance, the PPT criterion as the necessary and sufficient condition for two-qubit separability can be rewritten as ∑_j=x,y,z |T_jj| ≤ 1  <cit.>. Additionally and crucially, any two-qubit state ϱ_AB can be mapped to a Bell-diagonal state by local operations which conserve the values of the moments ℛ^(t) and do not increase the amount of entanglement present in the state <cit.>. Thus, any criterion derived for a Bell-diagonal state that is based solely on these moments is also valid for arbitrary states. The mapping from a general state to a Bell-diagonal state proceeds as follows. The four Bell states are eigenstates of the two observables g_1 = σ_x ⊗σ_x and g_2 = σ_z ⊗σ_z with all the four possible combinations of eigenvalues ± 1. The map ϱ↦ (ϱ + g_i ϱ g_i)/2 amounts to applying the local unitary transformation ϱ↦ g_i ϱ g_i with probability 1/2. If a general state is expressed in the basis of Bell states as ϱ = ∑_ijα_ij |BS_i ⟩⟨ BS_j|, then applying the above map for g_1 and g_2 removes all the off-diagonal terms, since for at least one g_i the Bell states |BS_i⟩ and |BS_j ⟩ have a different eigenvalue, so ϱ is mapped to the Bell-diagonal state ϱ_BD = ∑_iα_ii |BS_i ⟩⟨ BS_i|. This kind of depolarization is not specific to Bell states, it can be applied to several other multipartite scenarios, like GHZ-symmetric or graph-diagonal state <cit.>. 
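The depolarisation map described above is easy to verify numerically. The following sketch applies ϱ ↦ (ϱ + g_i ϱ g_i)/2 for g_1 = σ_x ⊗σ_x and g_2 = σ_z ⊗σ_z to a randomly generated two-qubit state and checks that the result is Bell-diagonal, i.e. that the off-diagonal correlations and the local Bloch vectors vanish while the diagonal correlations T_jj are untouched. The random-state ensemble and the seed are arbitrary illustrative choices.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)
paulis = {"x": sx, "y": sy, "z": sz}

def random_state(dim, rng):
    """A generic mixed state for testing (Ginibre-type construction)."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def corr(rho, a, b):
    """Expectation value tr[rho (a x b)]."""
    return np.real(np.trace(rho @ np.kron(a, b)))

rng = np.random.default_rng(7)
rho = random_state(4, rng)

rho_bd = rho.copy()
for g in (np.kron(sx, sx), np.kron(sz, sz)):            # rho -> (rho + g rho g)/2
    rho_bd = 0.5 * (rho_bd + g @ rho_bd @ g)

for j, s in paulis.items():
    print(f"T_{j}{j}: {corr(rho, s, s):+.4f} -> {corr(rho_bd, s, s):+.4f}")

off_diag = max(abs(corr(rho_bd, paulis[a], paulis[b]))
               for a in paulis for b in paulis if a != b)
local_max = max(max(abs(corr(rho_bd, s, id2)), abs(corr(rho_bd, id2, s)))
                for s in paulis.values())
print(f"largest |T_jk| (j != k) after the map: {off_diag:.2e}")
print(f"largest local Bloch component after the map: {local_max:.2e}")
```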
For a Bell-diagonal state ϱ_BD, the second and fourth moments of the product observable ℳ_P=σ_z ⊗σ_z are given by <cit.> ℛ^(2) =1/9∑_j=x,y,z T_jj^2, ℛ^(4) =2/75∑_j=x,y,z T_jj^4 + 27/25[ℛ^(2)]^2, respectively. Now, based on the separability constraint ∑_j=x,y,z |T_jj| ≤ 1, for a given value of ℛ^(2) one can maximise and minimise analytically the value of ℛ^(4) for separable states. This leads to a separability region in the parameter space spanned by ℛ^(2) and ℛ^(4). This approach allows to detect entanglement that cannot be detected by the second moment itself, which is illustrated in Fig. <ref>. Moreover, using additionally the sixth moment, a necessary and sufficient condition for entanglement of two-qubit Bell diagonal states can be found, see the Appendix in Ref. <cit.>. §.§.§ Evaluating the PPT criterion for two qubits For general two-qubit states, one can consider the randomised measurement scheme with non-product observables ℳ_NP. In fact, this scheme then allows for the complete characterisation of two-qubit entangled states. First, to detect two-qubit entanglement completely, we need to access the PPT criterion in the randomised measurement scheme. The PPT condition ϱ_AB^Γ_B≥ 0 for a two-qubit state ϱ_AB, discussed in Sec. <ref>, is equivalent to det(ϱ_AB^Γ_B) ≥ 0, since only one eigenvalue of ϱ_AB^Γ_B becomes negative if the state is entangled. In Ref. <cit.>, it has been found that det(ϱ_AB^Γ_B) can be expressed as (ρ^Γ_B) = 1/24 (1-6p_4+8p_3+3p_2^2-6p_2), where p_2 = (1+x_1)/4, p_3 = (1+3x_1+6x_2)/16, p_4 = (1+6x_1+24x_2+x_1^2+2x_3+4x_4)/64, with x_1 = I_2+I_4+I_7, x_2 = I_1+I_12 , x_3 = I_2^2 -I_3, x_4 = I_5+I_8+I_14+I_4 I_7. Here I_k are some of the Makhlin invariants <cit.>, which form a complete family of invariants under local unitaries. In Ref. <cit.>, it was shown that such Makhlin invariants can be accessed by the randomised measurement scheme. For instance, in the case with the product observable ℳ_P = σ_z ⊗σ_z, one has ℛ_ℳ_P^(2) = I_2 = ∑_j,k=x,y,zT_jk^2, where T = (T_jk) is the correlation matrix given in Eq. (<ref>) and I_2 corresponds to the sector length S_2. It is important to note here that while the third moment for local observables on qubits vanishes, that is ℛ_ℳ_P^(3) = 0, this is not the case for the non-product observable ℳ_NP = ∑_j=1^3 σ_j ⊗σ_j. Indeed, one can obtain ℛ_ℳ_NP^(3) = I_1 = T. In a similar way, other Makhlin invariants can be obtained via randomised measurements. This result directly implies the possibility of detecting any two-qubit entanglement. Further details about the Makhlin invariants are discussed in Sec. <ref>. §.§.§ State moments for entanglement detection The methods discussed so far were a direct implementation of the statistical moments ℛ^(t), i.e. of the moments of the distribution of correlation values. However, also the moments of the state itself can be used to derive certain separability bounds and detect entanglement. These quantities are LU invariant and hence a perfect candidate for the randomised measurement schemes. Lawson et al. have experimentally demonstrated the usefulness of the higher-order state moments for entanglement detection <cit.> in that matter. For two-qubit density matrices, they use a Pauli decomposition of the t-th power of the density matrix, ϱ^t, to define polynomials of correlation matrix elements, denoted as Q_t. In particular, for t=2, the expression directly resembles what we call the sector length, i.e. Q_2 = S_2 =ℛ^(2). 
For Q_3, the authors consider Q_3=⟨σ_xσ_z⟩⟨σ_yσ_y⟩⟨σ_zσ_x⟩ -⟨σ_xσ_y⟩⟨σ_yσ_z⟩⟨σ_zσ_x⟩ -⟨σ_xσ_z⟩⟨σ_yσ_x⟩⟨σ_zσ_y⟩ +⟨σ_xσ_x⟩⟨σ_yσ_z⟩⟨σ_zσ_y⟩ +⟨σ_xσ_y⟩⟨σ_yσ_x⟩⟨σ_zσ_z⟩ -⟨σ_xσ_x⟩⟨σ_yσ_y⟩⟨σ_zσ_z⟩, which depends on third-order terms of two-body correlations, only. Notice that Q_3 is indeed equal to -T = -I_1, which is one of the Makhlin invariants discussed in the previous section. Also, the higher-order expressions Q_4 and Q_5 were defined in Ref. <cit.>, where one can simply write them as Q_4 = I_2^2 - I_3 and Q_5 = - I_1 I_2 for the Makhlin invariants I_2, I_3 that will be defined in Eq. (<ref>) in Sec. <ref>. The authors perform numerical simulations to first establish a lower bound on concurrence which quantifies the entanglement of two-qubit states <cit.> based on the second and higher-order correlation matrix polynomials. As shown in Fig. <ref>, for a given value of Q_2, the concurrence can be bounded from both sides. As observed, only states with higher purities can achieve higher values of Q_2 as visible from the colour encoding (increasing purities from dark blue with 𝒫≤0.5 over green, red and light blue to yellow with 𝒫∈[0.8,0.9]). In a similar fashion, the concurrence of simulated two-qubit states is shown against the normalised Q_3, Q_4 and Q_5 in Fig. <ref>. This strongly suggests that large respective values of the latter lead to tighter bounds on the concurrence. For the experimental demonstration, Lawson et al. use a commercial spontaneous down-conversion source and perform local-unitary rotations on one of the two qubits using waveplates. As the state is assumed to be a maximally entangled |ϕ^-⟩=(|00 ⟩ - |11⟩)/√(2) state, this is formally equivalent to a rotation of both qubits and demonstrates the direct experimental accessibility of the Q_t polynomials, see Fig. <ref>. §.§ Bipartite systems of higher dimensions Although it is not trivial to generalise the methods presented so far to higher dimensional systems, at least for two-qudit states, i.e. the case of n=2 with local dimension d and product observables ℳ = M_A ⊗ M_B, several entanglement criteria can be found. As in the qubit case, a basic approach aims to derive criteria based on the second moment of the distributions of correlations and a more refined approach takes higher orders into account, allowing even to certify weakly entangled states such as bound entanglement. §.§.§ Second moments In the qubit case with d=2, the moments can be easily evaluated by virtue of the concept of spherical designs discussed in Sec. <ref>. This is based on the fact that unitary transformations on qubits can be regarded as orthogonal rotations on the Bloch sphere, due to the connection between SU(2) and SO(3) groups. In the qudit (higher dimensional) case, on the other hand, we lack the connection since the notion of a Bloch sphere is not available. Accordingly, not all possible observables are equivalent under random unitaries. However, as long as they are traceless the second moments do not depend on the choice of observables <cit.>. In fact, one can evaluate the second moments and turn them to the sector lengths. In the following, we denote that ℛ_A^(2) = S_1^A, ℛ_B^(2) = S_1^B, ℛ_A^(2) + ℛ_B^(2) = S_1, and ℛ_AB^(2) = S_2, where S_1 and S_2 are the sector lengths in Eq. (<ref>). In Ref. <cit.>, it has been shown that any two-qudit separable state obeys ℛ_AB^(2)≤ (d-1)^2. In Ref. <cit.>, the entanglement detection has been improved using the marginal moments ℛ_A^(2) and ℛ_B^(2). 
Any separable state obeys ℛ_AB^(2)≤ d-1 + (d-1) ℛ_A^(2) -ℛ_B^(2). Again, any violation implies that the state is entangled. This detection method was shown to be strictly stronger than the criterion in Eq. (<ref>). The criterion in Eq. (<ref>) is equivalent to the second-order Rényi entropy criterion <cit.> stating that any separable state obeys H_2(ϱ_A) ≤ H_2(ϱ_AB), H_2(ϱ_B) ≤ H_2(ϱ_AB), where H_2(ϱ) was defined in Sec. <ref>. The H_2(ϱ) has been estimated by local randomised measurements in Refs. <cit.>, where an ion-trap quantum simulator was used to perform measurements of the Rényi entropy. We note that the criterion in Eq. (<ref>) was extended to detect the Schmidt number <cit.> and that higher-order Rényi entropy criteria were also presented <cit.>. As a final remark, the violation of Eq. (<ref>) does not detect a weak form of entanglement known as bound entanglement, which cannot be distilled into pure maximally entangled states <cit.> and may not be verified by the PPT criterion <cit.>. §.§.§ Fourth and higher-order moments Several entanglement criteria, the ones based on second moments and, of course, all the ones based on PT moments from randomised measurements fail to detect bound entangled states. In this section, we explain that higher-order moments are able to detect such weakly entangled states. Let us begin by noting again that higher-order moments ℛ^(t) for t>2 from randomised measurements in high dimensions (d>2) depend on the choice of the observable, unlike the case of qubits or second moments in high dimensions. Ref. <cit.> has offered a systematic method to address this problem. The key result is that one can find observables ℳ = M_A ⊗ M_B such that the moments ℛ^(t) coincide with alternative moments as uniform averages over a high-dimensional sphere, the so-called pseudo-Bloch sphere, with 𝒮^(t) = N_d,t∫d𝐮_a ∫d𝐮_b {[ϱ_AB (𝐮_a·λ) ⊗ (𝐮_b ·λ)]}^t. Here, 𝐮_a, 𝐮_b denote (d^2-1)-dimensional unit real vectors uniformly distributed over the pseudo-Bloch sphere, λ = (λ_1, ⋯, λ_d^2-1) is the vector of Gell-Mann matrices, and N_d,t is a suitable normalisation constant. Also, the observables M_A and M_B are defined by a suitable choice of the eigenvalues for the coincidence between ℛ^(t) and 𝒮^(t) <cit.>. It is essential that the moments 𝒮^(t) are invariant not only over all local unitaries but also over all changes of local operator basis λ, meaning the independence of the specific choice of observable. In fact, the moments 𝒮^(t) for t=2,4 for any dimension can be evaluated analytically and are simply expressed as 𝒮^(2) = ∑_i=1^d^2-1τ_i^2, 𝒮^(4) = 2 ∑_i=1^d^2-1τ_i^4 + (𝒮^(2))^2, where a suitable normalisation constant N_d,t is chosen and τ_i are singular values of the two-body correlation matrix T = (T_ij) with T_ij = [ϱ_ABλ_i ⊗λ_j] for i,j=1,…, d^2-1. This results from the fact that the moments 𝒮^(t) are invariant under local orthogonal rotations of the matrix T. Accordingly, in a similar manner to Sec. <ref>, one can consider the space spanned by the moments (𝒮^(2), 𝒮^(4)) and formulate separability criteria in this space. As a suitable constraint for this purpose, the so-called de Vicente criterion proposed in Ref. <cit.> was used. Any two-qudit separable state obeys T_tr = ∑_i=1^d^2-1τ_i ≤ d-1, where ⋯_tr denotes the trace norm, invariant under orthogonal transformations. This results in the set of admissible values (𝒮^(2), 𝒮^(4)) for separable states in any dimension d, which allows for the detection of various bound entangled states. This is illustrated in Fig. <ref>. 
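To make the quantities concrete, the short sketch below evaluates 𝒮^(2), 𝒮^(4) and the trace norm of the correlation matrix for a two-qutrit isotropic state, i.e. a maximally entangled state mixed with white noise. Gell-Mann matrices normalised so that tr(λ_iλ_j) = dδ_ij are assumed, consistent with the convention used for the criterion here; only the de Vicente bound is checked, while the full separability region in the (𝒮^(2), 𝒮^(4)) plane requires the optimisation described in the text and is not reproduced. The isotropic family and the sampled noise values are illustrative choices.

```python
import numpy as np

def gell_mann(d):
    """Generalised Gell-Mann matrices rescaled so that tr(g_i g_j) = d * delta_ij."""
    mats = []
    for j in range(d):
        for k in range(j + 1, d):
            s = np.zeros((d, d), complex); s[j, k] = s[k, j] = 1.0
            a = np.zeros((d, d), complex); a[j, k] = -1j; a[k, j] = 1j
            mats += [s, a]
    for l in range(1, d):
        m = np.zeros((d, d), complex)
        m[:l, :l] = np.eye(l); m[l, l] = -l
        mats.append(np.sqrt(2.0 / (l * (l + 1))) * m)
    return [np.sqrt(d / 2.0) * m for m in mats]

d = 3
lam = gell_mann(d)
phi = np.zeros(d * d, dtype=complex)
phi[[i * d + i for i in range(d)]] = 1.0 / np.sqrt(d)    # maximally entangled state
proj = np.outer(phi, phi.conj())

for p in (0.20, 0.30, 0.90):                             # white-noise mixing parameters
    rho = p * proj + (1 - p) * np.eye(d * d) / d**2
    T = np.array([[np.real(np.trace(rho @ np.kron(li, lj))) for lj in lam] for li in lam])
    tau = np.linalg.svd(T, compute_uv=False)             # singular values of T
    S2, S4 = np.sum(tau**2), 2 * np.sum(tau**4) + np.sum(tau**2)**2
    violated = np.sum(tau) > d - 1 + 1e-9
    print(f"p = {p:.2f}: S^(2) = {S2:.3f}, S^(4) = {S4:.3f}, "
          f"||T||_tr = {np.sum(tau):.3f} -> {'entangled' if violated else 'bound not violated'}")
```

For this family the bound is violated exactly above the known separability threshold of isotropic states, p = 1/(d+1).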
As further generalisations, the characterisation of the Schmidt number as dimensional entanglement has been discussed using this method in Refs. <cit.>. The above method to detect bound entanglement has been implemented experimentally for two-qutrit chessboard states <cit.>: ϱ_ch =(1/N) ∑_i=1^4|V_i⟩⟨V_i|, which are written as a mixture of four unnormalised eigenstates |V_i⟩. The chessboard state was created by first generating two-qubit polarisation-entangled photon pairs through a spontaneous parametric down-conversion process and subsequently transforming them to two-qutrits |V_i⟩ via dimension-expanding local operations with motorised rotating half-wave plates and quarter-wave plates. The experimentally prepared chessboard state ϱ_ch^exp has a fidelity beyond 98% with the mixture between ϱ_ch and the white noise level p=0.129. For this state, the second and fourth moments (𝒮^(2), 𝒮^(4)) were computed, and its entanglement was verified in Ref. <cit.>. §.§ Multipartite entanglement structure The previous discussion was focused on verifying the presence of entanglement in bipartite quantum systems. In multipartite systems, the structure of entanglement can vary significantly between states culminating in genuine multipartite entanglement (GME). In this Section, we present a series of criteria for the analysis of multipartite entanglement which are all based on functions of both the full as well as the marginal second moments. We also discuss the results of a four-qubit experiment in which one of these criteria is applied to detect several types of entanglement. §.§.§ Full separability The detection of high-dimensional multipartite entanglement was discussed using the k-body sector length S_k. In Refs. <cit.>, it has been shown that any n-qudit fully separable state obeys S_k ≤nk(d-1)^k, where S_k is the k-body sector length. Violation of this bound implies that the state is entangled as can be easily demonstrated, for instance, in graph states. Note that this criterion can be seen as a generalisation of Eq. (<ref>) to sector lengths between a number of observers smaller than n and any dimension. One can also consider linear combinations of various sector lengths as it was shown in Ref. <cit.> that ∑_k=0^n[(d-1)n - dk]S_k ≥ 0 holds for any n-qudit fully separable state. This criterion is strictly stronger (detects more entangled states) than the one in Eq. (<ref>) and can be understood as the n-qudit generalisation of Eq. (<ref>). §.§.§ k-separability In order to introduce the notion of k-separability <cit.>, let us first consider pure states. A n-particle pure state is called k-separable if it can be written as |ψ_k-sep⟩ = |ϕ_1⟩⊗|ϕ_2⟩⊗⋯⊗|ϕ_k⟩. A mixed state is k-separable if it is a convex mixture of pure k-separable states, with different elements in the mixture possibly admitting different partitions into k subsystems. For k=n, this notion is equivalent to the full separability. For example, the following state is 100-qubit 12-separable: |ψ_100, 12⟩ =|GHZ_20⟩^⊗ 3⊗|GHZ_10⟩^⊗ 2⊗|GHZ_5⟩^⊗ 2⊗|Bell⟩^⊗ 5. In Ref. <cit.>, the hierarchical criteria for k-separability have been proposed using the full-body sector lengths S_n: any n-qubit k-separable state obeys S_n ≤ 3^k-1 2^n-(2k-1), if odd n, 3^k-1 (2^n-(2k-1)+1), if even n, for k=2,3,…, ⌊ (n-1)/2 ⌋. A violation of the inequality for some k implies that the state is at most (k-1)-separable. In particular, if the state violates the inequality with k=2, then it is verified to be genuinely n-partite entangled for n>4. 
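As an illustration of these sector-length criteria, the following sketch computes all S_k for the four-qubit GHZ state by brute-force summation over Pauli strings and compares them with the full-separability bounds (n choose k)(d-1)^k from above. The brute-force construction is only meant for small n and is not how the quantities would be obtained from randomised measurement data.

```python
import numpy as np
from itertools import product
from math import comb

paulis = {'I': np.eye(2, dtype=complex),
          'X': np.array([[0, 1], [1, 0]], dtype=complex),
          'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
          'Z': np.array([[1, 0], [0, -1]], dtype=complex)}

n = 4
ghz = np.zeros(2**n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)                        # (|0000> + |1111>)/sqrt(2)
rho = np.outer(ghz, ghz.conj())

S = np.zeros(n + 1)                                      # sector lengths S_0, ..., S_n
for string in product('IXYZ', repeat=n):
    op = paulis[string[0]]
    for p in string[1:]:
        op = np.kron(op, paulis[p])
    k = sum(p != 'I' for p in string)                    # weight of the Pauli string
    S[k] += np.real(np.trace(rho @ op))**2

for k in range(1, n + 1):
    bound = comb(n, k)                                   # (d-1)^k = 1 for qubits
    flag = "violated -> entangled" if S[k] > bound + 1e-9 else "satisfied"
    print(f"S_{k} = {S[k]:.1f}   full-separability bound {bound}: {flag}")
```

For the four-qubit GHZ state only the full-body sector exceeds its bound (S_4 = 9 > 1), which already certifies the presence of entanglement.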
§.§.§ Tripartite entanglement One idea to detect entanglement using second moments more efficiently is to consider linear combinations of full and marginal moments. Note that the sector lengths themselves are convex functions, but their combinations do not necessarily have to be convex. Recall that the relations between the sector lengths and the second moments are ℛ_A^(2) + ℛ_B^(2) + ℛ_C^(2)= S_1 and ℛ_AB^(2) + ℛ_BC^(2) + ℛ_CA^(2)= S_2, and ℛ_ABC^(2) = S_3, see Eq. (<ref>). In Ref. <cit.>, it has been shown that any fully separable three-qudit state obeys S_3 ≤ d-1 + (2d-3)/3 S_1 + (d-3)/3 S_2, whereas any three-qudit state which is separable for any fixed bipartition obeys S_2 + S_3 ≤ (d^3-2)/2 (1+S_1). In the case of d=2, strong numerical evidence suggests that the above inequality also holds for mixtures of biseparable states with respect to different partitions <cit.>, discussed in Eq. (<ref>). If this conjecture holds, a violation implies the presence of genuine three-qubit entanglement, but an analytical proof has not yet been provided. It is essential to note that the criteria in Eqs. (<ref>, <ref>) for d=2 can be interpreted in terms of the geometry of the three-qubit state space spanned by the sector lengths (S_1, S_2, S_3) <cit.>. In this space, both criteria were shown to be much more effective in certifying entanglement than criteria based only on the full sector length S_3. In particular, Eq. (<ref>) can detect multipartite entanglement for mixtures of GHZ states and W states, even if the three-tangle <cit.> and the bipartite entanglement in the reduced subsystems vanish simultaneously <cit.>. §.§.§ Nonlinear functions of second moments In Ref. <cit.>, another criterion based on products of marginal moments was formulated. Instead of considering the factorisability of the correlation functions themselves, the factorisability of the second moments is considered with a purity-dependent bound. Namely, for two-qubit states, the inequality obeyed by all separable states reads ℳ_2≡ℛ_AB^(2) - ℛ_A^(2)ℛ_B^(2)≤ (4𝒫-1)/9 for 𝒫<1/2, 4(1-𝒫)𝒫 / 9 for 𝒫≥1/2, where 𝒫 = tr(ϱ_AB^2) is the state's purity, the normalisation constant for moments was chosen as N_n,d,t = 1 in Eq. (<ref>), and a product observable ℳ = σ_z ⊗σ_z was used. A violation of this inequality implies the presence of entanglement between the parties. Unlike the previously presented results, this criterion is expressed as a nonlinear combination of the second moments. Importantly, this nonlinearity can enhance the detection power compared to the criterion discussed in Eq. (<ref>). In addition to the purity-dependent inequality for two-qubit states, inequalities for three- and four-qubit states have been obtained using numerical simulations. To indicate genuine tripartite (four-partite) entanglement, the value of ℳ_3 (ℳ_4) has to overcome the bounds given by ℳ_3 = ℛ_ABC^(2) - ℛ_A^(2)ℛ_BC^(2)- ℛ_B^(2)ℛ_AC^(2) - ℛ_C^(2)ℛ_AB^(2)≤ 8/27 (1-𝒫)𝒫, ℳ_4 = ℛ^(2)_ABCD - 1/2∑_M ℛ^(2)_M ℛ^(2)_M̄ ≤ 8/81 (1-𝒫^2), where the summation is performed over all subsets M of { ABCD } except for the full and empty set, and where M̄ denotes the complement of M. Although the simple form of the latter two expressions raises hope for a generalisation to ℳ_n, i.e. for an expression comparing the second-order moment of the distribution of the correlations of an n-qubit state with the products of all second-order moments of the marginals, such an expression remained missing.
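A compact numerical example for the two-qubit criterion reads as follows: for the one-parameter family of Werner states p|ψ^-⟩⟨ψ^-| + (1-p) 1/4 (an illustrative choice) it evaluates ℳ_2 and the purity-dependent bound, using the normalisation N_{n,d,t} = 1 assumed above. For this family the criterion detects entanglement only for sufficiently large p, although the state is already entangled for p > 1/3, illustrating that the bound is sufficient but not necessary.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)
paulis = [sx, sy, sz]

psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
proj = np.outer(psi_minus, psi_minus.conj())

def moment_bound(purity):
    """Purity-dependent separability bound for M_2 (two qubits)."""
    return (4 * purity - 1) / 9 if purity < 0.5 else 4 * (1 - purity) * purity / 9

for p in np.linspace(0.1, 0.9, 9):
    rho = p * proj + (1 - p) * np.eye(4) / 4
    T2 = sum(np.real(np.trace(rho @ np.kron(a, b)))**2 for a in paulis for b in paulis)
    a2 = sum(np.real(np.trace(rho @ np.kron(a, id2)))**2 for a in paulis)
    b2 = sum(np.real(np.trace(rho @ np.kron(id2, b)))**2 for b in paulis)
    R_AB, R_A, R_B = T2 / 9, a2 / 3, b2 / 3              # N_{n,d,t} = 1 normalisation
    M2 = R_AB - R_A * R_B
    purity = np.real(np.trace(rho @ rho))
    detected = M2 > moment_bound(purity) + 1e-12
    print(f"p = {p:.1f}: M_2 = {M2:.4f}, bound = {moment_bound(purity):.4f}, "
          f"{'entanglement detected' if detected else 'not detected'}")
```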
This concept has been experimentally demonstrated using a pair of polarisation-entangled photons created by means of spontaneous parametric down conversion <cit.>. The two entangled photons are sent into two separate interferometers, making both path and polarisation degree accessible <cit.>. In this experiment, no knowledge about or direct control of the measurement directions is required. However, the angles of the wave plates which were required to set the measurement directions cannot be selected from a uniform distribution because this would lead to a non-uniform sampling of the local unitaries. Hence, to ensure a uniform distribution according to the Haar measure, i.e. to distribute the measurement directions following a uniform sampling of the local Bloch spheres, appropriate unitary transformations have been randomly picked to then bring the wave plates to the angles corresponding to the drawn unitary transformation. Four different types of four-qubit states were experimentally studied: a triseparable state, a biseparable state, a GHZ state, and a linear cluster state. After applying random local transformations to each of the four qubits, projective measurements allowed to retrieve the statistics of correlation values as shown in Fig. <ref>. The shape as well as the factorisability of the particular distributions directly visualise the entanglement structure of the state. For example, for the triseparable state |Bell⟩|0⟩|0⟩ the distribution of the modulus of correlation values for the marginal of the first two qubits, i.e. E_12≡[(U_1^†σ_1 U_1 ⊗ U_2^†σ_2 U_2 ⊗1⊗1) ϱ ], is almost uniform and not explainable by a product of the single-qubit marginals (panels E_1 and E_2). At the same time, the other two qubits individually already show large values for E_3 and E_4, respectively, sufficient to explain the distribution E_34. This graphical analysis illustrates how a criterion for GME allows to probe the entire entanglement structure of a state when applied to different partitions and combinations of subsystems. The rightmost panel in Fig. <ref> shows the results of this structural analysis for the four experimentally prepared states, summarizing which states and marginals are violating the respective bounds. The top-most subplot (ℳ_4) indicates that both the GHZ and the linear cluster state are detected to be genuinely four-partite entangled, which is not detected for the biseparable or the triseparable state (as it should be). ℳ_2, on the other hand, shows that for the biseparable state |Bell⟩|ψ_ent⟩ the two-qubit marginals are still entangled, whereas the triseparable state carries entanglement solely between the first two qubits. §.§.§ Discriminating W-class entanglement In multipartite systems, the structure of entanglement becomes much richer and more complicated than in bipartite systems. In the bipartite state space, there exists an ordering structure in terms of quantum resource theories <cit.>. In this sense, the maximally entangled state can be defined as the entangled state that enables to create any bipartite entanglement by LOCC operations. On the other hand, in multipartite systems, such an ordering structure does not exist anymore and the notion of maximally entangled states cannot be defined uniquely. In fact, already three-qubit pure states are divided into two classes: the GHZ class and the W class. 
The GHZ state cannot be transformed to the W state and vice versa with LOCC operations, even if they are not required to reach the state with probability one (so-called stochastic LOCC (SLOCC) operations) <cit.>. This distinction leads to different roles the GME states play in information processing tasks <cit.>. For four-qubit states, there is already an infinite collection of such classes, which can be grouped into nine families <cit.>. Therefore, in addition to entanglement detection, another interesting and important issue is to determine which class a given multipartite state belongs to. The discrimination of W-class entanglement has been studied with two criteria based on the second and fourth moments of randomised measurements <cit.>. The first is an analytical upper bound for the second moment of a n-qubit W-class state: S_n ≤ 5-4/n, where we set S_n = ℛ^(2) and the inequality is saturated by a pure W state: |W⟩ = (|10⋯ 0⟩ + |01⋯ 0⟩ + ⋯ +|0⋯ 01⟩)/√(n). The violation of this condition implies that a multiqubit state is detected to be outside of the W class, i.e. it cannot be obtained from the W state by SLOCC. The second criterion uses a linear combination of the second and fourth moments, with weights optimised based on the n-qubit W state |W_n⟩ and the biseparable state |W_n-1⟩|ψ⟩. Furthermore, Ref. <cit.> has provided the characterisation of three-qubit and four-qubit states using the second and fourth moments from an extensive numerical approach. §.§ Finite datasets and single-setting entanglement detection Since in experimental practice always only finite datasets are available, statistical noise additionally complicates tasks such as the certification of entanglement, which is of course also true for the scenario of randomised measurements. In this section, we present tools to analyse and quantify these statistical effects and discuss how they affect the moments of probability distributions making it possible to refine the various entanglement criteria accordingly. For more details about the field of statistical analysis on quantum systems, see Refs. <cit.>. §.§.§ Estimation of correlation functions Statistical noise in the scenario of randomised measurements has two sources. The first is a result of the quantum nature of the system, for which the correlation tensor element T_i (for fixed measurement settings collectively indexed by i) cannot be measured directly but has to be determined in a statistical manner via the repetition of a probabilistic measurement which can yield a set of discrete outcomes. These outcomes are distributed around the mean value μ_i = T_i with a variance of σ_i^2, which in general is a function of T_i as well. Formally, this series of m measurements can be expressed as a set of independent and identically distributed (i.i.d.) random variables {X_1,X_2,…, X_m} with the same mean and variance. In this case the sample mean X = ∑ X_i / m is an unbiased and consistent estimator for μ_i, since its expectation value is E(X) = μ_i and X approaches μ_i for increasing m. The second type of statistical noise is the propagation of the fluctuation of the measured values of T_i to any quantity calculated from them, such as for example the second moment of their distribution ℛ^(2). Note that in this case the fluctuation of each particular T_i additionally combines with noise due to sampling of only a finite amount of different T_i's (different settings). 
For a set of m measured correlation values {T̂_1,T̂_2,…,T̂_m} the estimator ℛ̂^(2) = ∑_i T̂^2_i / m, albeit consistent, is, however, biased. Even though each particular T̂_i is an unbiased estimator for the corresponding T_i, the value of ℛ̂^(2) is systematically increased due to taking the square. Both the statistical fluctuation due to finite sample size as well as any systematic bias have to be taken into account when applying the entanglement criteria based on a bound violation, such as for example in Eq. (<ref>). When this is properly considered, it turns out that even very limited data sets can lead to a valid conclusion about the entanglement in the system <cit.>. A common way to quantify how much the values of a general estimator θ̂ deviate from the actual parameter θ is based on the likelihood. This is generally defined as the probability of the data given some assumptions or models (e.g., the probability of tomographic data given some quantum state ϱ). More specifically, one can consider the probability that the estimator takes an observed value for a given value of the parameter, Prob(θ̂ | θ). It allows us to calculate the p-value, i.e. the probability that for a certain θ, the estimate θ̂ will be sufficiently close. The usual definition is Prob( |θ̂ - θ| ≥δ) ≤α, where δ is called the error or accuracy, δ/θ the relative error, α the statistical significance level, and γ = 1-α the confidence level. A practical tool to estimate p-values is given by so-called concentration inequalities, such as the Chebyshev inequality, which for example allows to estimate the probability that the sample mean X̄ from a sample of m observations will be close to the mean value as Prob(|X̄-μ| ≥δ) ≤σ^2/(m δ^2). This relation allows us to estimate the minimal number of measurements m to achieve a certain significance level. §.§.§ Statistical significance and randomised measurements To evaluate the statistical effect when applying criteria based on quantities like ℛ^(t), it is crucial to quantify how much the estimated value can deviate from the parameter of interest, as expressed in Eq. (<ref>). As before, we consider an experiment in which a sample of M different measurement settings is chosen randomly and for each setting, measurements are performed on K state copies, see Fig. <ref>. An unbiased estimator for the moment can be given by ℛ̂^(t) = 1/M∑_i=1^M [Ê_t]_i. In this expression, Ê_t denotes the unbiased estimator of E^t (the t-th power of the correlation function E) that is obtained from the K measurements and the subscript i refers to the setting. Even in the case where [Ê_t]_i cannot be assumed to be i.i.d. random variables, we can find deviation bounds such as Eq. (<ref>) based on the variance of ℛ̂^(t) using the Chebyshev-Cantelli inequality Prob(|ℛ̂^(t) -ℛ^(t)| ≥δ) ≤ 2 Var(ℛ̂^(t))/(Var(ℛ̂^(t)) + δ^2). This leads to a minimal two-sided error bar δ_error =√((1+γ)/(1-γ))√(Var(ℛ̂^(t))), which guarantees the confidence level γ = 1-α. For instance, in the estimation of the second moment ℛ^(2) for an n-qubit state with a product observable, the expression of the variance is Var(ℛ̂^(2)) = 1/M[ A(K) ℛ^(4) + B(K) ℛ^(2) + C(K) - (ℛ^(2))^2 ], where A(K), B(K), C(K) are determined through the properties of the binomial distribution, and has been derived in Ref. <cit.>. From this result, the total number of measurements M_total = M × K needed for a precise estimation can be determined depending on the state under consideration and the required accuracy δ and confidence level γ = 1-α.
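The finite-K bias discussed here can be made explicit in a small simulation: for a Bell state, single-setting correlations are estimated from K binary outcomes each, and the naive average of squared estimates overshoots the ideal second moment by roughly 9(1-E^2)/K per setting, while replacing T̂^2 by the standard unbiased estimator (K T̂^2 - 1)/(K-1) of E^2 removes the bias. The correction shown is the generic U-statistics construction for independent ±1 outcomes and is not claimed to be the specific estimator of the cited references; the numbers of settings and repetitions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_directions(n):
    """Uniformly random Bloch-sphere directions."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

M, K = 2000, 20                         # number of settings and repetitions per setting
u1, u2 = haar_directions(M), haar_directions(M)
E = -np.sum(u1 * u2, axis=1)            # exact correlation of |psi^->: E = -u1.u2

# simulate K +/-1 outcomes per setting and form the empirical correlations
outcomes = np.where(rng.random((M, K)) < (1 + E[:, None]) / 2, 1.0, -1.0)
T_hat = outcomes.mean(axis=1)

naive = 9 * np.mean(T_hat**2)                           # biased upwards by ~9(1-E^2)/K
corrected = 9 * np.mean((K * T_hat**2 - 1) / (K - 1))   # unbiased estimate of E^2 per setting
print(f"naive estimate     : {naive:.3f}")
print(f"bias-corrected     : {corrected:.3f}")
print(f"exact value (Bell) : 3.000")
```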
The dependence of the second moment on the squared correlations gives rise to a systematic error that must be taken into account. Reference <cit.> proposes to mitigate this with the use of Bayesian methods. This requires establishing the probability P(T̃) with which a given value of correlations T̃, estimated after K trials, occurs in the experiment. The second moment written in this language is ∫_-1^1 dT T^2 P(T) = ∫_-1^1 ∫_-1^1 dT dT̃ T^2 P(T|T̃) P(T̃). The Bayes theorem then gives P(T|T̃) = P(T̃ | T) P̃(T) / P(T̃), where P̃(T) represents the prior assumption about the unknown ideal distribution P(T). In practice, this prior can be chosen as the estimated distribution P(T̃) and the conditional probability P(T̃ | T) can be assumed to be a normal distribution centred at T with variance (1-T^2) / K, leading to an updated estimation of the second moment ℛ̃^(2). §.§.§ Certification of entanglement In the case of entanglement detection with finite statistics the estimated second moment ℛ̃^(2) may happen to be larger than 1 also for a product state, as discussed in Ref. <cit.>. In such a scenario, an entanglement indicator can be formulated as ℛ̃^(2) > 1 + δ ⇒ likely |ψ⟩ is entangled, where δ determines the confidence of entanglement detection. Given sufficiently large M and K, a normal distribution with standard deviation Δ_M,K approximates the distribution of values of ℛ̃^(2) for any state. A significance level of 5% can be satisfied by setting δ=2 Δ_M,K, in which case the probability that a separable state will yield a value within the specified bound is 95.4%. Hence, a measurement of a value higher than 1 + 2 Δ_M,K detects entanglement with a p-value of 4.6% and confidence level of at least 95%. The statistical analysis gives rise to the possibility of entanglement detection with a single randomly chosen measurement setting. Table <ref> presents the probability that the n-qubit GHZ state violates the bound ℛ̃^(2)_M,K>1 + δ, with a single randomly chosen setting (M=1) and p-value of 4.6%. The case of ideal quantum predictions (K →∞) is compared with K = 1000 repetitions. For the ideal predictions, the violation probability grows with the number of qubits, whereas for finite statistics the detection probability first increases but then decays. § FUNCTIONS OF STATES Addressing the task of entanglement detection is vital for many quantum information applications. Nevertheless, it is not the only relevant question for quantum state analysis. One may also be interested in functions of states, which not only can be used to verify entanglement but also provide a description of other state features. Many quantum mechanical notions, such as the purity or PT moments explained above, are non-linear functions of the state and require in their standard definition the knowledge of the complete density matrix for their calculation, see Eq. (<ref>) and Eq. (<ref>) respectively. This, however, poses a challenge in experimental settings due to the exponential growth in the number of measurements required for state tomography as the number of particles increases. Randomised measurements have emerged as a valuable approach in this domain, enabling the extraction of these quantities from experimental data with considerably fewer resources. They can be accessed through properly designed functions of probabilities over outcomes of measurements in product bases, statistical moments and other post-processing techniques.
The following section provides an overview of approaches for the estimation of state functions through the use of these techniques. In addition, we will explain the notion of shadow tomography in Sec. <ref>, where randomised measurements also play a pivotal role. §.§ Determination of invariant properties of states A range of unitarily invariant quantities has been shown to be accessible by randomised measurements. In contrast to the methods discussed in Sec. <ref>, which rely mainly on the determination of statistical moments of the distribution of correlations, the approaches in this and the following subsections are based on the direct analysis of the occurring frequencies or probabilities. §.§.§ Theoretical results In the most general scenario, one starts with a set of (potentially nonlocal) unitary transformations { U } and a product basis { |s ⟩} ={ |s_1 … s_n ⟩} in which the measurements are performed. Without loss of generality, one can assume this to be the computational basis, as all quantities of interest in this section are invariant under a local basis change. Then, the probabilities P_U(s)=Tr(U ϱ U^†|s⟩⟨s|) of a string of results can be inferred from the experimental data. Consequently, properly designed functions of P_U(s) averaged, e.g. over a Haar-distributed set of unitaries { U }, allow the extraction of multiple state functions. Since the purity is invariant under any unitary transformation, it can even be accessed if the distribution is averaged under random global unitary transformations. Indeed, it was already shown early on that the purity of a state can be expressed as a linear function of the ensemble average of P_U(s)^2 <cit.>. Experimentally, however, the implementation of global random unitary transformations in systems with local interactions requires significant resources <cit.> and a simpler protocol is desirable. Indeed, if local unitary rotations are considered, it was shown in Ref. <cit.> that the purity of n-qubit states can be expressed as 𝒫(ϱ)=2^n ∑_s,s'(-2)^-D[s,s']⟨ P_U(s)P_U'(s')⟩. Here, U is a random local unitary according to the Haar measure and ⟨⋯⟩ denotes the ensemble average over all pairs U, U' of these unitaries. Furthermore, D[s,s'] is the Hamming distance between two bit strings, denoting the number of elements in which two n-tuples of measurement results differ. Note that for the qubit case, the measurements in x, y, and z direction form a spherical three-design, so one can replace the random unitaries by three measurements in these directions, see Sec. <ref>. Also, note that the purity 𝒫(ϱ) = Tr(ϱ^2) = Tr(ϱ⊗ϱ S) can be written as an expectation value on two copies of a state, which connects Eq. (<ref>) to the SWAP trick in Sec. <ref>. For two qubits, this expression can be understood in terms of the sector length. As mentioned above, we can consider local Pauli measurements, resulting in nine possible measurement combinations. Let us focus on a single term where, for definiteness, σ_z is measured on both particles. In this case, the Hamming distance can take the values 0, 1 or 2. Consequently, the terms in the sum have prefactors 1, -1/2 or 1/4. The occurring frequencies can be derived from products of the probabilities P(00), P(01), P(10), and P(11) of the results of a σ_z ⊗σ_z measurement. These products can also be expanded in terms of the squared expectation values ⟨σ_z ⊗σ_z⟩^2, ⟨σ_z ⊗1⟩^2, ⟨1⊗σ_z⟩^2, and ⟨1⊗1⟩^2.
Indeed, one finds after a short calculation ∑_s,s' (-2)^-D[s, s'] P(s) P(s') = 1×(P(00)^2 + P(01)^2 + P(10)^2 + P(11)^2 ) -1/2(P(00)P(01) + P(01)P(00) + … + P(11)P(10) ) +1/4 (P(00)P(11) + P(11)P(00) + P(01)P(10) + P(10)P(01)) = 9/16⟨σ_z ⊗σ_z⟩^2 + 3/16 (⟨σ_z ⊗1⟩^2+⟨1⊗σ_z⟩^2) + 1/16⟨1⊗1⟩^2. Averaging these terms over all nine possible measurement combinations would introduce additional prefactors occurring due to LU invariance of sector lengths, see Sec. <ref>. Then Eq. (<ref>) contains a sum over 9 different tensor products of Pauli measurements. Each of those will recover one term of the type ⟨σ_i ⊗σ_j⟩^2, certain triples of them will give the marginal terms like ⟨σ_i ⊗1⟩^2 giving the same weight as the two-body correlation ⟨σ_i ⊗σ_j⟩^2 and all nine contribute to the term ⟨1⊗1⟩^2. Knowing this, it is clear that the formula (<ref>) recovers the purity, which is proportional to the total sector length in Eq. (<ref>). This reasoning can be further generalised to n-qudit systems <cit.>. While Eq. (<ref>) was formulated for global systems, it can be easily applied to access the information about the purity of a subsystem, by considering random unitary operations and projective measurements on the subsystem i only. Therefore, the probabilities P_U_i(s_i), explicitly given as Tr(U_i ϱ_i U_i^†|s_i⟩⟨s_i|), allow one to recover the reduced state's purity. In the case of global unitary operations, the discussed approach recovers the original results presented by van Enk and Beenakker, but note that in this case the Hamming distance needs to be redefined <cit.>. Other interesting quantities are state overlaps and state fidelities. As shown in Ref. <cit.>, randomised measurements allow the cross-platform estimation of ℱ(ϱ_1,ϱ_2) =Tr(ϱ_1 ϱ_2)/max{𝒫(ϱ_1),𝒫(ϱ_2) }, with subscripts 1 and 2 referring to the states of two different quantum devices, labelled by 1 and 2. The above fidelity between mixed states was first proposed in Ref. <cit.> as a fidelity measure for mixed quantum states. It fulfils all of Jozsa's axioms <cit.> and can be interpreted as a Hilbert-Schmidt product of the two states normalised by their maximal purity. Contrary to the widely used Uhlmann fidelity <cit.>, it is easier to compute. Evaluation of Eq. (<ref>) is done via the randomised measurement of cross-correlations. That means that in Eq. (<ref>) different randomly chosen unitaries U and U' are applied to the output state of the two platforms independently. Again, this can be understood as a slight generalisation of Eq. (<ref>); it can also be understood as a version of the SWAP trick in the form of Tr(ϱ_1 ϱ_2) = Tr(ϱ_1 ⊗ϱ_2 S). In this spirit, the presented approach to randomised measurements is not limited to informational concepts only. It can also provide protocols for the measurement of many-body topological invariants (MBTIs) of symmetry-protected-topological phases. Like the Rényi entropy, MBTIs are non-linear functions of the reduced density matrices. The authors in Ref. <cit.> proposed a measurement scheme for MBTIs associated with the partial reflection, time reversal and internal symmetry <cit.> of one-dimensional interacting bosonic systems. Both of these quantities can be estimated using ensemble averages in a similar manner as in Eq. (<ref>). §.§.§ Experimental implementations A significant amount of experimental work was invested into demonstrating the feasibility of the above discussed techniques. Starting from the result presented in Eq. (<ref>), the second-order Rényi entropy defined in Eq.
(<ref>) was used to measure the entanglement entropy in an ion system. The measurements were performed for partitions of up to 10 qubits of a string of 20 qubits in a trapped-ion quantum simulator using ^40Ca^+ <cit.>, see Fig. <ref> for a part of the results. This experiment involved an application of M=500 unitary operations, generated numerically according to an algorithm given in Ref. <cit.>, each followed by K=150 repetitions of measurement. The measurement of the entropy over the time evolution of a state allows to study the dynamical properties of quantum many-body systems. A ballistic (linear) entropy growth is present for an interacting quantum system without disorder which will reach thermalisation, whereas a logarithmic entropy growth is expected for a system with strong disorder and sufficiently short-ranged interactions. The latter system will exhibit many-body localisation (MBL) <cit.>. Measurements of the entropy growth in Ref. <cit.> showed the diminishing effect of local, random disorder on the growth rate at early times compared to a system without disorder indicating localisation. Furthermore, the evolution of the second-order Rényi mutual information was detected providing another indicator of localisation due to the presence of disorder in the system. Another quantity which is related to multipartite entanglement of many-body systems is the quantum Fisher information (QFI) <cit.>. The inverse of the QFI sets a fundamental limit on the accuracy of parameter-estimation measurements and thus quantifies the potential of a quantum state in metrological applications <cit.>. In Ref. <cit.> the QFI was estimated via randomised measurements using two different platforms: a nitrogen-vacancy (NV) center spin in diamond and a superconducting four-qubit state provided by IBM Quantum Experience. For the one-qubit NV system, the dynamical evolution of the QFI was measured showing the applicability of the scheme to both pure and mixed quantum states. In the latter case, for multi-qubit states a lower bound on the QFI is set <cit.>. Randomised measurements also provide a powerful and experimentally feasible method to probe quantum dynamics. One example is quantum information scrambling which describes how initially localised quantum information becomes, during time evolution, increasingly nonlocal and distributed over the entire system under consideration. Scrambling can be measured via the decay of an out-of-time-order (OTO) correlation function. Randomised measurements allowed for the measurement of these OTO correlation functions in a four-qubit nuclear magnetic resonance based quantum simulator <cit.> and in a system of 10 trapped ions with local interactions of tunable range <cit.>. §.§ Estimation of higher PT moments As explained in Sec. <ref>, bipartite entanglement of ϱ_AB can be detected using the so-called PT moments, denoted as p_k = [(ϱ_AB^Γ_B)^k]. The second PT moment p_2 is nothing but the state's purity and its estimation from randomised measurements was already discussed in the previous section. Further analysis of a state, of course, involves higher-order PT moments that include essential information about the state. As an example, it was shown that they become especially useful for extracting the logarithmic negativity <cit.>. So the question arises how to obtain higher PT moments from randomised measurements. To address this, one needs to develop strategies of randomised measurements, such as generalisations of the SWAP trick mentioned in Sec. <ref>. 
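Before turning to p_3, it may help to see the simplest case, p_2 (the purity), in code. The following Python sketch (an illustrative example of ours, assuming NumPy; the noisy Bell state and all names are our choices) evaluates the Hamming-distance formula for the purity discussed above. As in the trapped-ion implementation mentioned earlier, both probability factors are taken from the same randomly drawn local unitary, and for simplicity the ideal outcome probabilities are used instead of finite-shot estimates.

import numpy as np

rng = np.random.default_rng(0)

def haar_qubit():
    # Haar-random single-qubit unitary via QR of a complex Gaussian matrix
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 2
# example state: mixture of a Bell state with white noise
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
p = 0.7
rho = p * np.outer(phi, phi.conj()) + (1 - p) * np.eye(4) / 4

# Hamming-distance kernel (-2)^(-D[s,s']) for n = 2
basis = [(a, b) for a in (0, 1) for b in (0, 1)]
Kmat = np.array([[(-2.0) ** (-((s[0] != t[0]) + (s[1] != t[1]))) for t in basis] for s in basis])

M = 3000
vals = []
for _ in range(M):
    U = np.kron(haar_qubit(), haar_qubit())        # random local unitary U_1 x U_2
    P = np.real(np.diag(U @ rho @ U.conj().T))     # ideal outcome probabilities P_U(s)
    vals.append(2**n * P @ Kmat @ P)

print("randomised estimate of the purity:", np.mean(vals))
print("exact value Tr(rho^2):            ", np.real(np.trace(rho @ rho)))

In an experiment the ideal probabilities would be replaced by unbiased estimates built from the K counts recorded per setting, which is exactly where the statistical considerations of the previous section enter.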
In the following, we will focus particularly on the third moment p_3, reviewing the results given in Refs. <cit.>. Let us begin by noticing that p_3 can be rewritten using a Hermitian operator M_neg acting on three copies of ϱ_AB, p_3 = Tr[ϱ_AB^⊗ 3 M_neg], M_neg = 1/2(W_cyc^A ⊗ W_inv^B + W_inv^A ⊗ W_cyc^B ), W_inv^X = (W_cyc^X)^-1 =(W_cyc^X)^T, where W_cyc^X is the cyclic operator with W_cyc^X |s_1, s_2, s_3⟩ = |s_2, s_3, s_1⟩ acting on the subsystem X=A,B of the three copies, for details see Sec. <ref>. With this expression, one can understand the operator M_neg in the scheme of randomised measurements. First, note that the third moment ℛ_ℳ^(3) defined in Eq. (<ref>) with an observable ℳ acting on the whole system AB can be reformulated as ℛ^(3)_ℳ(ϱ_AB) =N_2,d,3 Tr[ ϱ_AB^⊗ 3Φ_A^(3)⊗Φ_B^(3)(ℳ) ]. Here we denoted by Φ_X^(t)(𝒪) = ∫dμ(U_X) U_X^⊗ t𝒪^⊗ t (U_X^†)^⊗ t the t-fold twirling operation on system X=A,B, see Sec. <ref>. This representation gives an alternative interpretation of the t-th moment ℛ_ℳ^(t)(ϱ) as an expectation value of a locally-twirled observable on t copies of the state ϱ. So, the question arises how to choose the observable ℳ such that the desired M_neg is achieved by the twirling Φ_A^(3)⊗Φ_B^(3)(ℳ). To proceed, notice that the operator M_neg can also be decomposed into M_neg = 1/4 (M_+^A ⊗ M_+^B - M_-^A ⊗ M_-^B), where M_±^X = W_cyc^X ± W_inv^X for X=A,B. In Ref. <cit.>, each of these terms was shown to be accessible with randomised measurements. More precisely, there exist a product observable ℳ_P = ℳ_A ⊗ℳ_B and a non-product (Bell-basis) observable ℳ_Bell such that Φ_A^(3)⊗Φ_B^(3)(ℳ_P) = M_+^A ⊗ M_+^B, Φ_A^(3)⊗Φ_B^(3)(ℳ_Bell) = M_-^A ⊗ M_-^B. This immediately leads to the result M_neg = (1/4) [Φ_A^(3)⊗Φ_B^(3)(ℳ_P) - Φ_A^(3)⊗Φ_B^(3)(ℳ_Bell)], showing that the operator M_neg can be realised by combining the product and non-product observables from randomised measurement schemes. In a similar manner, higher-order PT moments can be evaluated as an expectation value of a permutation operator acting on k copies of ϱ_AB <cit.>, p_k=Tr[ ϱ_AB^⊗ k W_cyc^A ⊗ W_inv^B ], where W_cyc^A and W_inv^B are k-copy cyclic operators W_cyc^A |s_1, ⋯, s_k⟩ =|s_2, ⋯, s_k, s_1⟩ and W_inv^B |s_1, ⋯, s_k⟩ =|s_k, s_1, ⋯, s_k-1⟩ acting on partition A or B. In the following, we shortly explain another way to estimate the higher moments p_k based on tomographic methods shown in Refs. <cit.>, for more details see also Sec. <ref> below. A key idea is to create an unbiased estimator ϱ̂_AB^(r) for ϱ_AB from the data set of projective measurement results obtained with the different random unitaries r=1,2,…, M. Based on this, one can define unbiased estimators for the PT moments as p̂_k= 1/(k! C(M,k))∑_r_1 ≠⋯≠ r_k Tr[ W_cyc^A ⊗ W_inv^B ϱ̂_AB^(r_1)⊗⋯⊗ϱ̂_AB^(r_k)], where C(M,k) = M!/[k!(M-k)!] is the binomial coefficient and r_i labels the data acquired from the measurements performed after the action of the r_i-th unitary. This results from the so-called U-statistics properties and the factorisation of ϱ_AB^⊗ k into the products. Implementing these techniques allows one to estimate the PT moments, therefore allowing for the certification of entanglement through the p_k-PPT and the optimal p_k-OPPT criteria discussed in Sec. <ref>. §.§ Makhlin invariants The previous sections described the determination of the purity of states or PT moments using randomised measurements, while Sec. <ref> discussed the statistical moments ℛ^(t) as correlation-type quantities such as sector lengths. All the resulting quantities were LU invariant and from arguments as in Eqs.
(<ref>) and (<ref>) one can infer that arbitrary LU invariants can be measured with randomised measurements. So, the question arises how randomised measurements can be used to completely characterise the LU orbit of a quantum state. For two qubits, there is indeed a complete set of LU invariants, the so-called Makhlin invariants, and here we explain how they can be determined with randomised measurements <cit.>. Let us begin by recalling that two quantum states ϱ and σ are called LU equivalent if and only if one can be transformed into the other by a local unitary operation U_A ⊗ U_B, that is, ϱ = (U_A ⊗ U_B)σ (U_A^†⊗ U_B^†). Clearly, two LU equivalent states have the same values of quantities invariant under LU operations. Conversely, one may ask whether there is a (finite) set of invariants, such that two states are LU equivalent if they have the same values for these invariants. In two-qubit systems, this question was answered by Makhlin <cit.>. It has been shown that two two-qubit states ϱ and σ are LU equivalent if and only if they have equal values of all 18 LU invariants I_1, … ,I_18, nowadays called the Makhlin invariants. Using the notation in Eq. (<ref>) from Sec. <ref>, the first three invariants are given by: I_1 = det(T), I_2 = Tr(TT^⊤), I_3 = Tr(TT^⊤ TT^⊤). These three invariants are already sufficient to compute a potential violation of the CHSH quantity, given by S(ϱ) = 2√(λ_1 + λ_2), as discussed in Sec. <ref>. This is because the two largest eigenvalues λ_1 and λ_2 of the matrix TT^⊤ can be obtained from its characteristic polynomial and therefore can be computed from I_1, I_2, and I_3. Reference <cit.> has demonstrated that these invariants can be accessed from the moments ℛ^(t) of randomised measurements, providing a tool to certify Bell nonlocality in a reference-frame-independent manner. Using a similar approach, one can derive a lower bound on the teleportation fidelity of the state <cit.>. It is worth noting that I_2 and I_3 are also invariant under the partial transposition of a state, while I_1 flips the sign. This distinction corresponds to the fact that I_2 and I_3 can be obtained using randomised measurements with the product observable ℳ_P = σ_z ⊗σ_z, whereas I_1 comes from the non-product observable ℳ_NP = ∑_i=x,y,zσ_i ⊗σ_i. In addition, the invariant I_14 = Tr(H_a T H_b^⊤ T^⊤) also flips the sign under partial transposition, where (H_x)_ij = ∑_k=x,y,zϵ_ijk x_k represents the elements of a skew-symmetric matrix constructed from the Bloch vectors of the reduced states x_k = a_k, b_k and the Levi-Civita symbol ϵ_ijk. Also, the invariant I_14 was shown to be obtained from combinations of different non-product observables: ℳ_NP^± = 1⊗σ_x + σ_x ⊗1 + σ_y ⊗σ_z ±σ_z ⊗σ_y. The LU invariants I_1 and I_14 are sensitive to partial transposition and therefore play a crucial role in implementing the PPT criterion in randomised measurements, as explained in Sec. <ref>. Let us shortly explain the experimental scheme to obtain the LU invariants I_1, I_2, and I_3 from the moments ℛ^(t) of randomised measurements <cit.>. The creation of two-qubit states was implemented by polarisation-entangled photon pairs from an entangled photon source (EPS). This generates signal and idler photon pairs via four-wave mixing in a dispersion-shifted fiber (DSF). In the detector station (DS), each photon is detected with an efficiency of about 20 % and dark count probabilities of about 4 × 10^-5 per gate.
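As a brief aside before continuing with the experimental details, the following sketch (Python/NumPy; the noisy-Bell-state correlation matrix is our illustrative choice) shows how the invariants I_1, I_2, I_3 defined above fix the eigenvalues of TT^⊤ through its characteristic polynomial and hence the CHSH quantity S(ϱ) = 2√(λ_1 + λ_2).

import numpy as np

# example two-qubit correlation matrix T_ij = <sigma_i x sigma_j> (a noisy Bell state)
v = 0.85
T = v * np.diag([1.0, -1.0, 1.0])

# Makhlin invariants accessible from randomised measurements
I1 = np.linalg.det(T)
I2 = np.trace(T @ T.T)
I3 = np.trace(T @ T.T @ T @ T.T)

# eigenvalues of T T^T from its characteristic polynomial, whose coefficients
# are fixed by the invariants: e1 = I2, e2 = (I2^2 - I3)/2, e3 = I1^2
e1, e2, e3 = I2, (I2**2 - I3) / 2, I1**2
lam = np.sort(np.real(np.roots([1.0, -e1, e2, -e3])))[::-1]

S = 2 * np.sqrt(lam[0] + lam[1])      # maximal CHSH value (Horodecki criterion)
print(f"I1={I1:.3f}, I2={I2:.3f}, I3={I3:.3f}, S={S:.3f}  (Bell non-local if S > 2)")

In the experiment these invariants are of course not computed from a known matrix T, but estimated from the moments ℛ^(t), as described next.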
The experimental generation of random unitaries has been accomplished using polarisation scramblers (SCR) as depicted in Fig. <ref>. These can rapidly change the Stokes vector on the Bloch sphere and thereby create random polarisation rotations. Clearly, it is necessary to check that a set of unitaries generated in this way is really Haar random. Here, it is sufficient to show that the used random unitaries form a unitary t-design with an appropriate order t, depending on the degree of the polynomial used in the evaluation. One can confirm the degree of Haar randomness for a set of unitaries by computing the frame potential, discussed in Sec. <ref>. Recall that only unitary t-designs achieve the minimal value of the frame potential and that t-designs suffice to reproduce the t-th moments ℛ^(t) from randomised measurements. Then, to determine the LU invariants I_1, I_2, and I_3, one has to evaluate the frame potential of the finite set of drawn unitaries up to order four and compare it with the minimal value. Finally, the LU invariants I_1, I_2, I_3 were measured using their unbiased estimators Î_1, Î_2, Î_3 from the experimental data obtained with the generated set of unitaries, and their statistical behaviour is illustrated in Fig. <ref>. This analysis aimed to certify Bell nonlocality and assess the usefulness of the state for quantum teleportation. §.§ Randomised measurements and shadow tomography Let us finally explain how randomised measurements can be used to obtain information about a quantum state in the framework of shadow tomography. Generally, quantum state tomography refers to the task of determining the entire density matrix from measurement data <cit.>. As the number of free parameters in the density matrix scales exponentially with the number of particles, this becomes in practice unfeasible already for a modest number of qubits. Several methods have been suggested to circumvent the scaling problem. These include matrix-product-state tomography <cit.> and compressed sensing <cit.>, which work well if the state under scrutiny obeys certain constraints, e.g. it has a high purity. Another example is permutationally invariant tomography <cit.>, where relevant subspaces of the entire Hilbert space are probed. Shadow tomography, proposed theoretically in Ref. <cit.> and suggested as a practical procedure in Ref. <cit.>, is also a scheme to reduce the experimental effort, but here the idea is to redefine the task of tomography in a meaningful way. In standard tomography, one determines a density matrix ϱ, and this is typically used for predicting the expectation values of observables that were not measured. Shadow tomography can be understood as a method to predict future measurements as well as possible with high probability <cit.>. In a first data collection phase, measurements are carried out and for each result an (unbiased) estimator of the quantum state is recorded. The collection of all these estimators is called the classical shadow of ϱ and, importantly, storing these data can be achieved with a moderate effort. As the name suggests, this task does not aim to recover the full density matrix ϱ, but only the shadow that ϱ casts on the measurements. In the second phase, called the prediction phase, the classical shadow is used to predict other observables that were not measured in the data collection phase. Two questions are relevant: First, how does one obtain estimators for the quantum state from a single measurement result?
Second, how to choose the measurements in the data collection phase in order to predict many other measurements later with high accuracy? Let us elaborate on these points explicitly, following the description of Ref. <cit.>. Assume that some generalised measurement (or positive operator-valued measure, POVM) is carried out on a quantum state defined by the density matrix ϱ. This measurement consists of a collection of effects E={E_i} which are positive (E_i ≥ 0) and normalised (∑_i E_i = 1) and the probabilities for the outcomes are given by p_i = Tr(ϱ E_i). This POVM defines a map from the space of density matrices to the probability distributions via Φ_E(ϱ) ={Tr(ϱ E_i)}, and the adjoint Φ_E^† maps probability distributions to operators. In general, one can ask how to infer the state ϱ from an observed probability distribution. Performing a single measurement of the POVM leads to a single outcome k, so that the distribution over the outcome probabilities corresponds to a δ-type probability distribution with {q_i}=δ_ik. The problem of how to assign an estimator to such a single-point distribution is indeed known and solved in the fields of data science and machine learning <cit.>. The least squares estimator is the so-called shadow estimator χ and is as a map given by: χ = (Φ_E^†Φ_E)^-1Φ_E^†. To assure the existence of the inverse of C_E=Φ_E^†Φ_E, it is assumed that the POVM is informationally complete, that is, the state ϱ can be reconstructed from the full set of probabilities p_i. With the given definition of C_E one also directly finds that C_E(ϱ)=∑_k Tr(ϱ E_k) E_k. Then, for the δ-type distribution {q_i}=δ_ik we have that Φ_E^†({q_i}) = E_k, leading to the estimate of ϱ from a single data point, ϱ̂_k = C_E^-1(E_k). This estimator, however, does not need to be a proper quantum state; in particular, ϱ̂_k may have negative eigenvalues. So far, the discussion was general and the POVM E was not specified. The key point is now that randomised measurements can be viewed as such a POVM. If one measures randomly one of the three Pauli matrices σ_x, σ_y, or σ_z on a single qubit, this can be seen as a six-outcome POVM with the effects {|x^±⟩⟨x^±|/3, |y^±⟩⟨y^±|/3, |z^±⟩⟨z^±|/3 }. For these effects, one can directly calculate that the shadow estimator for the outcome k^± is given by ϱ̂_k = 3 |k^±⟩⟨k^±| - 1. This is not a positive semidefinite density matrix, but this does not affect its usefulness for predicting future measurements. If random Pauli measurements are carried out on n qubits, the global POVM has 6^n outcomes. Most importantly, the shadow estimator is the tensor product of the single-qubit estimators as in Eq. (<ref>). This product structure allows one to store the multi-qubit shadow efficiently. Let us now discuss the error of this procedure, if one wishes to estimate an arbitrary observable X from the collection of shadows {ϱ̂_k_ℓ}, where ℓ= 1, … , M labels the measurements. First, the mean value (1/M) ∑_ℓ Tr(X ϱ̂_k_ℓ) converges to ⟨ X ⟩ = Tr(X ϱ). The precision of the estimation is quantified by the variance in a single experiment, Var[Tr(X ϱ̂)] = ∑_k Tr(X ϱ̂_k)^2 Tr(ϱ E_k) - ⟨ X ⟩^2, which is upper bounded by the shadow norm ‖ X ‖^2_E := ‖∑_k Tr(X ϱ̂_k)^2 E_k ‖_ op, where ‖…‖_ op denotes the largest eigenvalue of an operator. Given a set 𝒳 of observables that one wishes to predict, the quality of a shadow tomography scheme can be quantified by κ^2_E = max{‖ X ‖^2_E, X ∈𝒳}. Clearly, this depends on the set 𝒳 and the POVM E chosen.
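The following Python sketch (an illustrative example of ours, assuming NumPy; the three-qubit GHZ example and all names are our choices) implements this randomised-Pauli shadow scheme: each snapshot rotates every qubit into a randomly chosen Pauli basis, records the outcome, forms the product of single-qubit estimators ϱ̂ = 3|b⟩⟨b| - 1, and the snapshots are then averaged to predict a local observable.

import numpy as np

rng = np.random.default_rng(7)

# single-qubit Pauli eigenbases: columns are the +/- eigenvectors of X, Y, Z
bases = {
    'X': np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    'Y': np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
    'Z': np.eye(2, dtype=complex),
}
id2 = np.eye(2, dtype=complex)

def shadow_snapshot(psi, n):
    # one randomised-Pauli measurement on |psi>, returning the product shadow estimator
    settings = rng.choice(list(bases), size=n)
    U = np.array([[1.0 + 0j]])
    for s in settings:
        U = np.kron(U, bases[s].conj().T)      # rotate each qubit into its measurement basis
    probs = np.abs(U @ psi) ** 2
    outcome = rng.choice(2**n, p=probs / probs.sum())
    bits = [(outcome >> (n - 1 - i)) & 1 for i in range(n)]
    rho_hat = np.array([[1.0 + 0j]])
    for s, b in zip(settings, bits):
        ket = bases[s][:, b].reshape(2, 1)
        rho_hat = np.kron(rho_hat, 3 * (ket @ ket.conj().T) - id2)   # 3|b><b| - 1 per qubit
    return rho_hat

# example: 3-qubit GHZ state, predict the two-body observable Z_1 Z_2
n = 3
psi = np.zeros(2**n, dtype=complex); psi[0] = psi[-1] = 1 / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Z12 = np.kron(np.kron(Z, Z), id2)

M = 2000
est = np.mean([np.real(np.trace(Z12 @ shadow_snapshot(psi, n))) for _ in range(M)])
print("shadow estimate of <Z_1 Z_2>:", est, " (exact value: 1)")

For readability the sketch multiplies out the full 2^n-dimensional estimator; in practice one stores only the n single-qubit factors per snapshot, which is what makes the method scalable.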
Here, it turns out that the random choice of three Paulis per qubit is often the optimal choice of the POVM <cit.>. Finally, it is also worth noting that shadow tomography with local randomised Pauli measurements is a well-suited tool for estimating observables X which act on few qubits only. If X acts on L qubits, then it is determined by the mean values of 3^L tensor products of Paulis. In a shadow tomography scheme with M randomised settings, each of these tensor products has, on average, been measured M/3^L times. This provides sufficient information if M is large enough, but note that the required M does not depend on the number n of particles. This idea can be translated to rigorous statistical statements for local observables, see Ref. <cit.> for details. The scheme proposed in Ref. <cit.> was implemented experimentally using four-qubit GHZ states encoded with polarisation-entangled photons <cit.>. Three schemes with uniform, biased and derandomised classical shadows were compared to conventional ones that sequentially measure each state function using importance sampling or observable grouping, where the derandomised classical shadow method was shown to outperform other advanced measurement schemes. For other experimental implementations see Refs. <cit.>. § NON-LOCAL CORRELATIONS In Sec. <ref> we introduced the notion of non-local correlations and the framework of Bell-type inequalities to test them. In general, correlations not only depend on the quantum state under consideration, but also on the choice of measurements. As already discussed in the simple example of the CHSH inequality (<ref>), in order to achieve maximal violation, the measurement settings must be chosen carefully. This requires great care in the preparation of the experiment. In the case of multi-observer Bell tests, where the efficiency of the experimental setup is much more challenging to maintain, the situation becomes even more complicated. In this chapter, we outline how it is nevertheless possible to study quantum correlations also with randomly selected measurements. In general, to verify the presence of correlations that violate a Bell inequality (and its extensions to multipartite scenarios), it is necessary to measure several combinations of local measurement settings across the relevant parties for an entangled state. For example, let us consider the case of the CHSH inequality in a randomised measurement scenario. To measure the first expectation value E(𝐮_1,𝐮_2), the first observer measures in the randomly chosen setting 𝐮_1, while the second observer uses the randomly chosen setting 𝐮_2. For the second expectation value, the second observer can switch to the randomly selected setting 𝐮^'_2, while the first observer remains at setting 𝐮_1. It becomes clear that for the third expectation value the second observer now has to return to the previously employed setting 𝐮_2 in order to measure it jointly with a new setting of the first observer. Therefore, in all such experiments it is necessary to be able to revisit previously employed settings. Note that in typical implementations of so-called “loophole-free” Bell tests which exclude Bell-local models as possible explanations of quantum phenomena, it is additionally necessary to switch between the local settings on a shot-to-shot basis without predetermining the subsequent setting. The type of randomness which can be present when trying to violate a Bell-type inequality thus cannot consist of fluctuating noise or a total lack of control over the settings.
Rather either random but fixed rotations between the reference frames of different observers (often denoted in the literature as “misaligned devices”) or even fixed, random rotations when choosing different settings within each local frame (denoted as “uncalibrated devices”) are suitable. These random rotations need to stay fixed on the timescale of data acquisition for at least one run of the respective experiment. The only exception to this is the average correlation <ref> which is based on the first moment of the distribution of correlation functions and is thus compatible with full randomness, as in the criteria of Secs. <ref> and <ref>. §.§ Probability of violation In general, when measurement settings are chosen randomly, the violation of Bell-type inequalities such as the CHSH inequality is no longer assured, even for a highly entangled state. Thus, a natural way to quantify the strength of quantum correlations in a particular system is to ask what is the probability of violation of any CHSH inequality if observers choose observables at random without caring about the precise determination of the optimal measurement settings. Such a probability (also called the “volume of nonlocality” <cit.> or “nonlocal fraction” <cit.>) can be expressed by 𝒫_V^ CHSH = ∫_ΩdΩ f_ CHSH(Ω) , where the integration is over all parameters that define the observable (local measurement settings) and f_ CHSH = 1 for settings that violate any of the CHSH inequalities and f_ CHSH = 0 otherwise. Apart from CHSH inequalities, the probability of violation 𝒫_V^I can of course also be formulated for any other family of Bell-type inequalities I. While in general the probability of violation depends on the choice of measure dΩ, the Haar-measure emerges as a natural choice, since in this case the probability of violation becomes invariant under local unitary rotations applied to the state <cit.>. In Ref. <cit.> the probability of violation of the CHSH inequality by the Bell state |ϕ^+⟩ = (|00 ⟩ + |11 ⟩)/√(2) was derived analytically as 𝒫_V^ CHSH = 2(π-3) ≈ 28.32%. Furthermore, also in Ref. <cit.>, the probabilities of violation for complete sets of MABK <cit.> and WWWŻB <cit.> inequalities by the n-particle GHZ state have been determined numerically. The violation increases with the number of particles but seems to converge asymptotically (for large n) to a value strictly below unity. §.§ Strength of non-locality The non-locality of a state ϱ can not only be quantified via the probability of violation but also through the strength of non-locality 𝒮 <cit.>. This quantity is based on testing how robust the correlations are under the addition of white noise. The strength is obtained by considering the noisy state ϱ parameterized by the noise parameter v, so-called visibility, with ϱ(v) = v ϱ + (1-v)1/d^n1 and finding the critical visibility v=v_crit below which no inequality is violated for a given set of measurement settings. The value v_crit provides the strength of non-locality via 𝒮 = 1 - v_crit. The critical visibility minimized over measurements is denoted as v_crit^min and corresponds to the maximal strength of non-locality 𝒮^max. For values below v_crit^min the probability of violation reduces to 0. The strength of non-locality 𝒮, does not provide complete information about the non-local properties of the state, primarily as it only quantifies for which v the probability of violation 𝒫_V reduces to 0 and not how 𝒫_V depends on variations in v. 
This information is captured, at least partially, via the average strength of non-locality 𝒮̅, given as the expectation value 𝒮̅ = ∫_0^𝒮^max𝒮 g(𝒮) d𝒮, where g(𝒮) is the probability density over values of 𝒮 when choosing measurement directions according to dΩ, normalized such that ∫_0^𝒮^max g(S)d𝒮 = 1. Examples of g(𝒮) are presented in Ref. <cit.>. As the number of settings per party tends to infinity, 𝒮 becomes more and more independent of the choice of measurement directions Ω and g(𝒮) →δ(𝒮^max-𝒮) <cit.>. In this case 𝒮̅ and 𝒮_max become equivalent. It is also possible to define a similar quantifier for states, the so-called “trace-weighted nonlocality strength” <cit.>, which is based on the trace distance of Bell correlations <cit.> instead of the strength of non-locality of Bell correlations. Notably, just as the probability of violation, both strengths are invariant under LU. Furthermore, it can be shown that both are strictly positive for all pure entangled states in a setup with at least two binary-outcome measurements per party. The proof is based on the fact that any pure multipartite entangled state violates some two-setting two-outcome Bell inequality <cit.>. §.§ Generalized Bell-type inequalities The Bell scenario can be generalised to a larger number of observers n, measurement settings m, and a larger dimension of the local Hilbert space d. There are many examples of Bell-type inequalities for many different experimental situations <cit.>. However, full sets of tight Bell inequalities (Bell-Pitovsky polytopes) are known only in a few cases <cit.>. Moreover, the analytical determination of probabilities of violation for other states and other Bell inequalities is challenging due to the nature of the integral in Eq. (<ref>). A numerical method based on linear programming can, however, successfully address this problem. It is known (see, e.g. Ref. <cit.>) that there exists a Bell-local explanation of correlations if and only if there exists a joint probability distribution p_BL over the results of all settings for all observers, from which the experimental probabilities can be predicted via marginalisation. Thus, for a given set of measurement settings the numerical search for this probability distribution, under the constraint that it generates the experimental correlations, is equivalent to testing all possible Bell-type inequalities for these particular observables. If such a distribution can be found, it implies that the correlations are classical and thus no inequality can be violated <cit.>. Conversely, if the joint distribution cannot be found, the correlations are shown to be Bell non-local. In the numerical computations, this is of course true up to the numerical precision. Note that this method does neither yield nor require knowledge of the exact form of Bell-type inequalities for a given experimental situation (which are often unknown), but nevertheless effectively allows to test all of the conceivable inequalities. By sampling the measurement settings with sufficient statistics according to the measure dΩ, the numerical method allows approximating the unconditioned 𝒫_V, which contrary to 𝒫^I_V is independent of the choice of a specific family of Bell-type inequalities I. At the same time, the method is also extremely well suited to determine and maximise the strength of non-locality 𝒮^max. This numerical approach was used for the first time to the GHZ states and two measurement settings in Ref. 
<cit.> and comprehensively for various qubit and qutrit states and multiple measurement settings in Ref. <cit.>. It can be straightforwardly generalised to a larger number of particles, measurement settings, and sub-system dimensions, see, e.g. Ref. <cit.>. While the computational approach is equivalent to testing all possible Bell inequalities, it should be noted that already a single family of inequalities is sufficient to detect non-locality in most cases <cit.>. This optimal family with 𝒫^opt_V is obtained by extending the CHSH inequality to more observers and settings, e.g. via a lifting procedure <cit.>. It turns out that for sufficiently many parties and settings 𝒫_V ≈𝒫^opt_V, as for example already for the three-qubit W state, 𝒫_V = 54.893%, while 𝒫^opt_V = 50.858%. §.§ Properties of the probability of violation In the following, we briefly present the most important properties of the probability of violation and the notion of typicality of non-locality. Dependence on number of measurement settings.—It was shown in Ref. <cit.> that the probability of violation increases rapidly with the number of measurement settings per party. For the GHZ state, already for two parties and five settings, 𝒫_V is close to 100%. This fact can be explained in two ways. Firstly, from a statistical point of view, as the number of settings increases, so does the chance of finding suitable pairs of settings. Secondly, new inequalities emerge involving all the additional measurement settings. Additionally, in Ref. <cit.> it is shown in general that for any pure bipartite entangled state, the probability of violation tends to unity when the number of measurement settings tends to infinity. Multiplicativity of probability of violation.—The probability of violation is not additive, but rather multiplicative <cit.>, in the sense that the probability 𝒫_BL = 1 - 𝒫_V to choose settings allowing a Bell-local explanation is multiplicative over subsystems with 𝒫_BL (ϱ_1 ⊗ϱ_2) = 𝒫_BL (ϱ_1) 𝒫_BL (ϱ_2). This immediately hints at the increase of 𝒫_V with system size. Consider for example a product of n two-qubit GHZ states GHZ_2^⊗ n. Using the result 𝒫_V(GHZ_2) = 2(π-3) obtained above, the probability of violation becomes 𝒫_V( GHZ_2^⊗ n) = 1 - 𝒫_BL( GHZ_2^⊗ n) = 1 - (𝒫_BL( GHZ_2))^n = 1 - (7-2π)^n, which converges to unity for large n. Maximal probability of violation.—It was shown that the n-particle GHZ state maximises the probability of violation for a special set of two-outcome inequalities containing only full n-particle correlation functions when n is even <cit.>. More generally, if we are not restricted to any particular type of inequality and consider the full set of possible two-setting Bell inequalities (in practice only realisable via the numerical method discussed in Sec. <ref>) it turns out that the GHZ states do not exhibit the highest 𝒫_V. Note that due to the numerical nature of the considerations, it is difficult to explicitly find the state which maximises 𝒫_V. Maximal non-locality and maximal entanglement.— One commonly used measure of non-locality, namely the robustness to white noise, was already introduced in Eq. (<ref>). However, it has many disadvantages. One flagship example is the discrepancy between the maximal violation of the two-setting d-outcome CGLMP inequality <cit.> and its violation by the d × d maximally entangled state. It turns out that some asymmetric states tolerate a greater admixture of white noise while remaining non-local than the maximally entangled states. 
For the probability of violation, however, the value is maximised by the maximally entangled states and the anomaly disappears at least for d ≤ 10 <cit.>. In addition, it is proven in Ref. <cit.> that for two qubits in a pure state the probability of violation for bipartite full-correlation Bell inequalities is an entanglement monotone. Witness for genuine multipartite entanglement.—The probability of violation can also serve as a witness of genuine multipartite entanglement <cit.>. For example, for n=3 and two settings per party, 𝒫_V > 2(π - 3) certifies that the state is truly multipartite entangled <cit.>. For a larger number of particles and a larger number of settings, similar criteria can also be formulated. However, so far they are only based on numerics, see, e.g. Ref. <cit.>. Typicality of non-locality.—The notion of probability of violation also allows to address the question how typical non-locality is not only under variation of observables but also of states. In Ref. <cit.>, it was shown that the typical n-qubit states present in many quantum information problems for n ≥ 5 exhibit Bell non-local correlations for almost any choice of observables (𝒫_V >99.99%). In Ref. <cit.>, through a sampling of the whole state space of pure states it was demonstrated that for a random pure state the probability of violation strongly increases with the number of qubits and already for N ≥ 6 it is greater than 99.99%. §.§ Average correlation An alternative to considering the probability of violation 𝒫_V has been proposed in Ref. <cit.>. It builds on the intuition that for example the CHSH inequality is violated if the correlations E(𝐮_a,𝐮_b) are sufficiently high for at least two pairs of different settings. Thus, instead of testing any particular Bell-type inequality, the average correlation Σ is calculated with Σ = ∫_ΩdΩ |E(𝐮_a,𝐮_b)|, i.e. it corresponds to the first moment of the modulus of the correlation function. In Ref. <cit.>, it is conjectured and confirmed via numerical simulation that for bipartite systems Σ > √(2) / 4 implies Bell non-locality and Σ < 1/4 excludes it, i.e. a state with Σ < 1/4 cannot violate any CHSH inequality. Compared to the testing of Bell inequalities this method has the advantage that it can be applied even in scenarios with fluctuating randomness where previous settings cannot be revisited. §.§ Genuine multipartite non-locality (GMNL) The notion of Bell-locality readily extends from two to an arbitrary number of parties. In a Bell-local scenario, there are only classical correlations between any of the parties and it is not possible to violate any Bell-type inequality for any partition of the system. The set of such multipartite Bell-local correlations is commonly denoted by ℒ. Just as with the concept of genuine multipartite entanglement, however, the notion of Bell-nonlocality can be refined to genuine multipartite non-locality (GMNL). It allows to distinguish cases where there are only some Bell non-local correlations in the system, from cases where all parties share suitable correlations. Starting from the bi-partite definition, there are several options how to define GMNL, the first of which was proposed by Svetlichny <cit.>. 
In a straightforward formal analogy to the definition of Bell locality, it defines non-GMNL tripartite correlations as those which admit the decomposition P(a,b,c|x,y,z) = ∑_λ q_λ P(a,b|x,y,λ)P(c|z,λ) + ∑_μ q_μ P(a,c|x,z,μ)P(b|y,μ) + ∑_ν q_ν P(b,c|y,z,ν)P(a|x,ν), where λ, μ, ν are shared parameters used to correlate measurement outcomes, the outcome of Alice when she chooses setting x is denoted by a and similarly pairs y, b and z, c denote the settings and outcomes of Bob and Charlie. Note that in each term the statistics of measurement results of one party conditionally factors out from the statistics of the other parties. In general, for n observers, the set of these correlations is denoted by 𝒮_2, where the subscript indicates factorisability into at least two parts. As pointed out in <cit.> this definition is rather strict since no additional assumptions are made about the joint probability distributions such as P(ab|xyλ). In particular they can correspond to signalling correlations, which allow for superluminal communication between parties. Note that the signalling is only a formal consequence of the decomposition since P(abc|xyz) of course does not contain any signalling. This potentially unphysical nature of the decomposition motivates a different definition of non-GMNL correlations <cit.>, namely those that can be decomposed as Eq. (<ref>), but with the additional requirement that all elements of the decomposition need to remain non-signalling, i.e. physical. The corresponding set of non-GMNL correlations is denoted as 𝒩𝒮_2. For any n ≥ 3, this condition is strictly stronger than just requiring non Bell-local correlations, and strictly weaker than Svetlichny's definition and therefore gives rise to the following strict inclusions ℒ⊊𝒩𝒮_2⊊𝒮_2. Based on these definitions, it is now straightforward to define the probability of violation 𝒫_V, just as in the case of standard multipartite non-locality, as 𝒫_V(ϱ,S) = ∫ f(ϱ, Ω) dΩ, where f(ϱ, Ω) = 1, if settings lead to correlations outside the set S, 0, otherwise. The case of S = ℒ corresponds to the discussion above and S = 𝒩𝒮_2 as well as S = 𝒮_2 correspond to the probability of violation given the alternative definitions of GMNL, respectively. Note that just as in the standard case, for each definition of GMNL either specific families I of inequalities can be distinguished or a general probability for any type of inequality can be considered. A numerical study of these probabilities has been performed for specific inequalities I in <cit.>. For each setup (defined by the multipartite state ϱ, the number of settings per party m, and the set 𝒮), we can single out a dominant inequality I that gives the best lower bound value of 𝒫_V. In Ref. <cit.>, the probability of violation 𝒫_V was investigated for three-party GHZ and W states with increasing number of measurement settings per party for the different tripartite scenarios (for results, see Tab. <ref> for m=2, and see Ref. <cit.> for m>2). The following conclusions about 𝒫_V can be drawn based on Tab. <ref> and on additional numerical data in the tables of Refs. <cit.>. Importantly, 𝒫_V steadily increases with the number of measurement settings m <cit.>. In particular, the probability of violating the ℒ and 𝒩𝒮_2 conditions is greater than 99.9% in the respective cases m>3 and m>5, for both W_3 and GHZ_3 states. For 𝒮_2, however, this percentage is much smaller, especially for the W_3 state. This trend is observed even up to m=6. The numerical computations in Refs. 
<cit.> also demonstrate that in each of the three discussed tripartite scenarios, one can clearly distinguish a so-called dominant inequality, i.e. a facet of the respective convex sets ℒ, 𝒩𝒮_2, 𝒮_2. A dominant inequality defines a family of equivalent inequalities which are most often violated for a given Bell-type experiment. As we can see in Tab. <ref>, the estimated 𝒫_V^I is surprisingly close to the 𝒫_V value for almost all two-setting (m=2) scenarios. In fact, the same tendency is observed for larger m as well <cit.>. In particular, a very small difference between the 𝒫_V^I and 𝒫_V can be observed for ℒ and 𝒩𝒮_2 scenarios up to m=6. The worst match between the two probabilites is found in the 𝒮_2 scenario for the W_3 state. §.§ Guaranteed violation for partial randomness It was observed that partial local alignment can lead to a significant increase in the probability of violation, culminating in the guaranteed violation, i.e. cases for which 𝒫_V = 100% (apart from trivial cases when, e.g. all measurement directions are the same). Under suitable conditions, it is robust against noise and experimental deficiencies <cit.>. §.§.§ Locally orthogonal settings As mentioned above, for the case of two-qubit maximally entangled state and the CHSH inequality, when observers cannot align their measurement settings (neither between them or locally) and thus choose them randomly, the probability of violation of the CHSH inequality is approximately 28.3%. If, however, local calibration is possible such that each party can choose two orthogonal settings the probability increases to 41.3% <cit.>. For an n-qubit GHZ state and two measurement settings per site, the 𝒫_V increases and reaches a value greater than 99.9% for n ≥ 4. §.§.§ Local measurement triads If the number of orthogonal settings is further increased to three, i.e. we allow orthogonal triads of measurement directions, the probability of violation effectively reaches 100% already for two parties. The only case when a violation would not occur is if the triads would happen to be perfectly aligned <cit.>. For multipartite Bell scenarios, guaranteed violation was shown numerically in Refs. <cit.> for up to 8 qubits. In addition, the numerical study in Ref. <cit.> suggests that the multipartite correlations arising from the randomly generated triads certify with almost certainty for n=3 and n=4 parties the existence of genuine multipartite entanglement possessed by the GHZ_n state. That is, even in the absence of an aligned reference frame, a device-independent way of certifying genuine multipartite entanglement <cit.> is possible. In particular, for the specific cases of three and four parties, results, which were obtained from semidefinite programming, suggest that these randomly generated correlations always reveal, even in the presence of a non-negligible amount of white noise, the genuine multipartite entanglement possessed by these states <cit.>. In other words, provided local calibration can be carried out to good precision, a device-independent certification of the genuine multipartite entanglement contained in these states can, in principle, also be carried out in an experimental situation without sharing a global reference frame. §.§.§ MUBs in higher dimensional systems The pairs of orthogonal measurements as well as the triads are instances of mutually unbiased bases (MUB) for qubits. This is generalised in Ref. <cit.>, where the probability of violation is tested on maximally entangled bipartite d× d systems using local MUBs. 
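To make the quoted numbers concrete, the following Monte Carlo sketch (Python/NumPy; an illustrative example of ours) estimates the probability of violating some CHSH inequality for the two-qubit maximally entangled state, once with two fully random directions per party and once with random orthogonal triads, reproducing approximately the 28.3% figure and the essentially guaranteed violation discussed above.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
T = np.diag([1.0, -1.0, 1.0])          # correlation tensor of |phi+>

def rand_dir():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def rand_triad():
    # random orthonormal triad of measurement directions
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return list(q.T)

def chsh_violated(A, B):
    # check all CHSH inequalities over pairs of settings from the lists A and B
    E = np.array([[a @ T @ b for b in B] for a in A])
    for (i, k) in combinations(range(len(A)), 2):
        for (j, l) in combinations(range(len(B)), 2):
            e = E[np.ix_([i, k], [j, l])]
            s = np.abs([e[0, 0] + e[0, 1] + e[1, 0] - e[1, 1],
                        e[0, 0] + e[0, 1] - e[1, 0] + e[1, 1],
                        e[0, 0] - e[0, 1] + e[1, 0] + e[1, 1],
                        -e[0, 0] + e[0, 1] + e[1, 0] + e[1, 1]])
            if s.max() > 2 + 1e-12:
                return True
    return False

N = 20000
p_random = np.mean([chsh_violated([rand_dir(), rand_dir()], [rand_dir(), rand_dir()])
                    for _ in range(N)])
p_triads = np.mean([chsh_violated(rand_triad(), rand_triad()) for _ in range(N)])
print(f"random pairs:  P_V ~ {p_random:.3f}   (analytical 2(pi-3) ~ 0.283)")
print(f"random triads: P_V ~ {p_triads:.3f}   (expected ~ 1)")

Extending the sampling to locally orthogonal pairs, or to MUBs in higher dimensions, follows the same pattern.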
Note that MUBs are known to provide maximal quantum violation of certain Bell inequalities, including the CHSH inequality (see, e.g. Ref. <cit.>). Using higher dimensional random MUB measurements in the case of d=3 and d=4, near guaranteed Bell violation was obtained <cit.>. §.§.§ Planar measurements with single aligned direction Apart from local calibration there is even less randomness when the observers are allowed to share one measurement direction while leaving the other still randomly unaligned. If in this case the observers choose locally orthogonal measurement settings in the shared plane, any of the CHSH inequalities is always violated <cit.> (except for a set of zero measure). In the case of n qubits and the set of MABK inequalities, it has been shown that the probability of violating any of the MABK inequalities by a factor of ϵ√(2) is equal to 𝒫_V^MABK = (4/π) arccosϵ. This leads to the conclusion that there is a guaranteed (albeit sometimes only infinitesimal) violation of the MABK inequality. §.§ Experimental demonstrations Experiments with entangled photons serve as the prototype for studies in the context of non-local correlations with randomised measurements <cit.>. Note that the scenario with partial alignment where two parties share only a single measurement direction is of immediate practical relevance in photonic systems. If, for example, the photons are transmitted via polarisation-maintaining fibers the parties all share a linear polarisation basis, with random phase rotations between the two basis states. Similarly, for photons distributed along free-space links between rotating objects such as satellites the linear polarisation is maintained, but the relative orientation of the linear polarisation bases is unknown. The two-qubit experiments of Refs. <cit.> both use CHSH inequalities to show the presence of Bell correlations. Three-qubit experiments have more varying approaches, where in Ref. <cit.> the Svetlichny inequality is employed to show GMNL and in Ref. <cit.> a representative type of Bell-inequality and a representative so called “hybrid”-inequality are considered, where the latter is less strict allowing for some Bell-type correlations between subsets of parties. Finally, Ref. <cit.> tackles the scenario of entanglement swapping and tests the corresponding bilocality inequality. All these inequalities are based on observing two measurement settings per party and although some experimental schemes involve more settings this does not lead to different inequalities, but rather allows to consider several versions of each inequality in post processing by differently combining the results for different choices of pairs of settings at each observer. The inequality with the strongest violation is then chosen as the result. Since the observation of generalised Bell correlations requires to revisit the same settings, randomness of settings has to be introduced in a controlled or at least reproducible manner. Most of the experiments use polarisation entanglement, and therefore measurements are realised by polarisation-dependent beam splitting <cit.> or polarisation filtering <cit.> preceded by controllable waveplates, which can apply all possible unitary transformations allowing to access any desired direction on the Bloch sphere. In the somewhat different scenario of Ref. <cit.>, additionally a liquid crystal device is employed, which adds unknown rotations between the two parts of the experiment. Even in Ref. 
<cit.>, where also the path degree is employed, the final measurement is done as a polarisation measurement after that path state has been transferred to polarisation. In Ref. <cit.>, a significantly different scheme is applied, where the path entanglement is measured by sending each photon into an interferometer realised using integrated optics. In this platform, a phase shift can be induced both before and inside of the interferometer, which again allows to access all possible measurement directions for the Bloch sphere of the path qubit. These path-state rotations in integrated optics are much less controllable than in the case of polarisation in free space and as such the technical approach from <cit.> itself motivates an approach with random measurements. While in principle all theoretical results are based on Haar-randomness, not all experiments fully realise this requirement. For experiments based on polarisation measurements, full Haar-randomness can in principle be easily achieved by appropriately choosing the distribution of settings for the waveplates. However, Refs. <cit.> and <cit.> remain unclear whether a true Haar-random sampling was actually implemented. In Ref. <cit.>, while the settings do not correspond to a Haar-random sampling, appropriate statistical weights are introduced when calculating the probability to violate the CHSH inequality. Also, it is acknowledged in Ref. <cit.> that the evenly distributed choice of settings for the waveplates does not translate into a Haar-random sampling, the consequences of which, however, are not discussed further by the authors. In contrast, the authors of Ref. <cit.> admit explicitly that their measurement scheme is somewhat biased, but conclude that even without any correction their results correspond to the theoretical predictions sufficiently well. Finally, several of the experiments also address the question how typical experimental imperfections affect their results. The main contributions affecting the recorded correlations are reduced fidelity of the entangled states and statistical noise due to the limited number of state copies often encountered in experiments with multiple photons. §.§.§ One axis aligned The least random scenario, where each local frame is calibrated and additionally the parties share a common reference direction is addressed in Refs. <cit.> and <cit.>, for two and three qubits, respectively. In Ref. <cit.> the choice of two locally orthogonal settings achieves the theoretical prediction of guaranteed violation in almost all trials, where statistical noise and an imperfect state can fully explain the small fraction of cases where it was not achieved. For the three-qubit GHZ state in Ref. <cit.> triples of equally distributed measurements on the plane of the Bloch sphere are performed (denoted as “Y-Shaped triads”), after which all combinations of possible Svetlichny inequalities are considered. The higher number of chosen settings increases the probability to measure in directions sufficiently suited to observe GMNL. Also here the theoretical prediction of a certain violation is confirmed by the experimental results. §.§.§ Only local calibration - no alignment between parties For the next level of randomness where any relative alignment between the parties is absent in Ref. <cit.> the prediction of a guaranteed violation is tested and well confirmed up to experimental imperfections. 
Additionally, it is explored how the scheme of using two mutually unbiased settings fails to achieve a violation with certainty as the two parties gradually lose the alignment of one common direction, see Fig. <ref>a. However, even in the case of total misalignment, a violation is still obtained with a probability of 42%, fitting nicely with the theoretical prediction. Conversely, in Ref. <cit.> the scheme with triplets of mutually unbiased settings at each location ("orthogonal measurement triads") yields a certain violation when accounting for experimental imperfections, even without any alignment. The authors additionally investigate how this probability is affected by state deterioration by artificially introducing a temporal delay in the entangling CNOT process, which reduces the coherence of the state. As shown in Fig. <ref>b, there exists a relatively large range of reduced coherence where a violation with almost certainty is still achieved. In the three-qubit experiment of Ref. <cit.>, the usage of tetrahedral bases at each location achieves a violation with a probability of roughly 58%, as shown in Fig. <ref>c, which seems somewhat far away from the theoretical prediction of 88%. However, also this deviation is explained very well by reduced state fidelity and high statistical noise not unusual for a three-photon experiment. Finally, the entanglement swapping scenario from Ref. <cit.> considers three parties, A, B, and C, where C establishes entanglement between A and B via a projective measurement on two qubits, each of which is entangled with a qubit at A and B respectively. While there is full alignment between the measurements at A and the A-side of C (C^A), both the other measurement at C (C^B) and the measurement at B itself have randomly rotated reference frames. As in Ref. <cit.>, the experimental scheme of Ref. <cit.> involves a measurement of orthogonal triads for the two parties with randomly rotated frames C^B and B. As shown in Fig. <ref>d, the theoretical prediction of a certain violation of the corresponding bilocality inequality is confirmed very well. §.§.§ Complete lack of calibration The most extreme case of randomness, when also no information about the relative orientation of local settings is present, is considered in Ref. <cit.> for two qubits and in Ref. <cit.> for the three-qubit GHZ state. The approach of Ref. <cit.> is to simply measure a certain number m of different settings per party, where a higher number of trials simply increases the probability that a combination of settings allowing a violation will occur. Indeed, the authors show that with as few as four or five completely randomly chosen settings per party, violations will be observed with close-to-unit probability. In Ref. <cit.>, a representative Bell inequality and a less strict hybrid inequality, which allows for some nonlocality in the model, are experimentally tested with trials that always involve two settings per party. The authors verify that the theoretical predictions of 61.1% probability of violation for the Bell inequality and 5.7% for the hybrid inequality are consistent with the experimental results within the margins of error. § CONCLUSIONS We reviewed foundational aspects of randomised measurements and their applications in quantum information. It was shown in detail how quantum entanglement is characterised and witnessed via statistical properties of correlations averaged over random measurement settings.
This includes higher-dimensional entanglement, bound entanglement, different classes of multipartite entanglement, and the number of entangled particles. The methods described are systematic, with the growing complexity of the analysis revealing increasingly refined properties of entangled states. Whenever possible, theoretical criteria were discussed in parallel with their experimental implementations, taking into account the effects of finite statistics. Quantum designs were also reviewed as a means to simplify the implementation of averaging over random measurements. These methods give access to many other characteristics of a quantum state, which we illustrated with estimations of non-linear functions of density matrices, including purity, fidelity, many-body topological invariants and local unitary invariants of the correlation tensor, among others. Our last focus was on the role randomised measurement settings play in the violation of Bell inequalities. In particular, we covered the typicality of violation, related quantifiers of non-classicality, and the impact of restricted randomness in both bipartite and multipartite scenarios. All these results clearly demonstrate that randomised measurements are powerful theoretical and experimental tools. At the same time, they are not yet fully explored and exploited. To conclude, we would like to mention a few related open problems. The field of entanglement detection via randomised measurements has evolved towards using higher moments of the correlation distribution, showing steady progress in the number and subtlety of detected entangled states. It is therefore natural to ask whether, given all the moments, or the joint probability distribution of averaged correlations for all marginals, the entanglement of any mixed state can be witnessed in a reference-frame-independent way. A similar problem is whether general LU equivalence can be decided on the basis of randomised measurements only. For two qubits this is indeed possible, as we reviewed for the reconstruction of the Makhlin invariants, but the problem remains open for higher dimensions and multipartite systems. In both situations, the complete sets of invariants are unknown, and the invariants identified so far have not been phrased explicitly in terms of randomised measurements. Moving on from the informational properties of quantum states, randomised measurements should be applied to the characterisation of quantum processes and are likely useful in other domains. A natural field is thermodynamics, where information-theoretical tools have already been applied very successfully. On the practical side, Haar randomness is a requirement of all the schemes discussed. It therefore becomes important to develop methods confirming the degree of randomness realised in experiments and to construct efficient ways of generating high-dimensional Haar-random unitaries. Theoretical protocols should also be studied for choices of settings other than Haar-random ones. Finally, it is intriguing to see random measurements applied to the study of k-body marginal properties and used to measure them on macroscopic quantum samples. Altogether, we hope to stimulate further progress in this domain, leading to new fundamental discoveries and practical protocols based on randomised measurements. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § ACKNOWLEDGEMENTS We thank Jan L. Bönsel, Borivoje Dakić, Qiongyi He, Marcus Huber, Daniel E.
Jones, Andreas Ketterer, Waldemar Kłobus, Brian T. Kirby, Shuheng Liu, Simon Morelli, Stefan Nimmrichter, Peter J. Shadbolt, Shravan Shravan, Jens Siewert, Géza Tóth, Minh Cong Tran, Julio I. de Vicente, Giuseppe Vitagliano, Harald Weinfurter, Nikolai Wyderka, Xiao-Dong Yu, Yuan-Yuan Zhao, for discussions and collaborations on the subject. This work has been supported by the DAAD, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), from the DFG under Germany’s Excellence Strategy – EXC-2111 – 390814868 (Munich Center for Quantum Science and Technology), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K and 16KIS1621), the National Science Centre (NCN, Poland) within the Preludium Bis project (Grant No. 2021/43/O/ST2/02679), the EU (QuantERA eDICT) and the National Research, Development and Innovation Office NKFIH (No. 2019-2.1.7-ERA-NET-2020-00003). munroe_photon_1995 M. Munroe, D. Boggavarapu, M. E. Anderson, M. G. Raymer, Photon-number statistics from the phase-averaged quadrature-field distribution: Theory and ultrafast measurement, Phys. Rev. A 52 (1995) R924–R927. https://doi.org/10.1103/PhysRevA.52.R924 doi:10.1103/PhysRevA.52.R924. two-photon_beenakker_2009 C. W. J. Beenakker, J. W. F. Venderbos, M. P. van Exter, Two-photon speckle as a probe of multi-dimensional entanglement, Phys. Rev. Lett. 102 (2009) 193601. https://doi.org/10.1103/PhysRevLett.102.193601 doi:10.1103/PhysRevLett.102.193601. liang_nonclassical_2010 Y.-C. Liang, N. Harrigan, S. D. Bartlett, T. Rudolph, Nonclassical Correlations from Randomly Chosen Local Measurements, Phys. Rev. Lett. 104 (5) (2010) 050401. https://doi.org/10.1103/PhysRevLett.104.050401 doi:10.1103/PhysRevLett.104.050401. laing_reference_2010 A. Laing, V. Scarani, J. G. Rarity, J. L. O'Brien, Reference-frame-independent quantum key distribution, Phys. Rev. A 82 (2010) 012304. https://doi.org/10.1103/PhysRevA.82.012304 doi:10.1103/PhysRevA.82.012304. peeters_observation_2010 W. H. Peeters, J. J. D. Moerman, M. P. van Exter, Observation of two-photon speckle patterns, Phys. Rev. Lett. 104 (2010) 173601. https://doi.org/10.1103/PhysRevLett.104.173601 doi:10.1103/PhysRevLett.104.173601. wallman_generating_2011 J. J. Wallman, Y.-C. Liang, S. D. Bartlett, Generating nonclassical correlations without fully aligning measurements, Phys. Rev. A 83 (2) (2011) 022110. https://doi.org/10.1103/PhysRevA.83.022110 doi:10.1103/PhysRevA.83.022110. van_enk_measuring_2012 S. J. van Enk, C. W. J. Beenakker, Measuring Tr(ρ^n) on Single Copies of ρ Using Random Measurements, Phys. Rev. Lett. 108 (11) (2012) 110503. https://doi.org/10.1103/PhysRevLett.108.110503 doi:10.1103/PhysRevLett.108.110503. shadbolt_guaranteed_2012 P. Shadbolt, T. Vértesi, Y.-C. Liang, C. Branciard, N. Brunner, J. L. O'Brien, Guaranteed violation of a Bell inequality without aligned reference frames or calibrated devices, Scientific Reports 2 (1) (2012) 470. https://doi.org/10.1038/srep00470 doi:10.1038/srep00470. laskowski_experimental_2012 W. Laskowski, D. Richart, C. Schwemmer, T. Paterek, H. Weinfurter, Experimental Schmidt Decomposition and State Independent Entanglement Detection, Phys. Rev. Lett. 108 (2012) 240501. https://doi.org/10.1103/PhysRevLett.108.240501 doi:10.1103/PhysRevLett.108.240501. palsson_experimentally_2012 M. S. Palsson, J. J.
Wallman, A. J. Bennet, G. J. Pryde, Experimentally demonstrating reference-frame-independent violations of Bell inequalities, Phys. Rev. A 86 (3) (2012) 032322. https://doi.org/10.1103/PhysRevA.86.032322 doi:10.1103/PhysRevA.86.032322. laskowski_optimized_2013 W. Laskowski, C. Schwemmer, D. Richart, L. Knips, T. Paterek, H. Weinfurter, https://doi.org/10.1103/physreva.88.022327Optimized state-independent entanglement detection based on a geometrical threshold criterion, Phys. Rev. A 88 (2) (Aug. 2013). https://doi.org/10.1103/physreva.88.022327 doi:10.1103/physreva.88.022327. <https://doi.org/10.1103/physreva.88.022327> elben_randomized_2022 A. Elben, S. T. Flammia, H.-Y. Huang, R. Kueng, J. Preskill, B. Vermersch, P. Zoller, The randomized measurement toolbox, Nature Reviews Physics 5 (1) (2022) 9–24. https://doi.org/10.1038/s42254-022-00535-2 doi:10.1038/s42254-022-00535-2. bartlett_refFrame_2003 S. D. Bartlett, T. Rudolph, R. W. Spekkens, Classical and quantum communication without a shared reference frame, Phys. Rev. Lett. 91 (2003) 027901. https://doi.org/10.1103/PhysRevLett.91.027901 doi:10.1103/PhysRevLett.91.027901. ohliger_efficient_2012 M. Ohliger, V. Nesme, J. Eisert, Efficient and feasible state tomography of quantum many-body systems, New J. Phys. 15 (1) (2013) 015024. https://doi.org/10.1088/1367-2630/15/1/015024 doi:10.1088/1367-2630/15/1/015024. knips_moment_2020 L. Knips, A moment for random measurements, Quantum Views 4 (2020) 47. https://doi.org/10.22331/qv-2020-11-19-47 doi:10.22331/qv-2020-11-19-47. tran_quantum_2015 M. C. Tran, B. Dakić, F. Arnault, W. Laskowski, T. Paterek, Quantum entanglement from random measurements, Phys. Rev. A 92 (5) (2015) 050301. https://doi.org/10.1103/PhysRevA.92.050301 doi:10.1103/PhysRevA.92.050301. tran_correlations_2016 M. C. Tran, B. Dakić, W. Laskowski, T. Paterek, Correlations between outcomes of random measurements, Phys. Rev. A 94 (4) (2016) 042302. https://doi.org/10.1103/PhysRevA.94.042302 doi:10.1103/PhysRevA.94.042302. ketterer_characterizing_2019 A. Ketterer, N. Wyderka, O. Gühne, Characterizing Multipartite Entanglement with Moments of Random Correlations, Phys. Rev. Lett. 122 (12) (2019) 120505. https://doi.org/10.1103/PhysRevLett.122.120505 doi:10.1103/PhysRevLett.122.120505. delsarte_spherical_1991 P. Delsarte, J.-M. Goethals, J. J. Seidel, Spherical codes and designs, in: Geometry and Combinatorics, Elsevier, 1991, pp. 68–93. colbourn_crc_2010 C. J. Colbourn, CRC handbook of combinatorial designs, CRC press, 2010. wyderka_learning_2020 N. Wyderka, Learning from correlations: What parts of quantum states tell about the whole, Phd thesis, Universität Siegen, Siegen (2020). seymour_averaging_1984 P. D. Seymour, T. Zaslavsky, Averaging sets: A generalization of mean values and spherical designs, Advances in Mathematics 52 (3) (1984) 213–240. https://doi.org/10.1016/0001-8708(84)90022-7 doi:10.1016/0001-8708(84)90022-7. nielsen_quantum_2011 M. A. Nielsen, I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition, Cambridge University Press, 2010. https://doi.org/10.1017/CBO9780511976667 doi:10.1017/CBO9780511976667. kurzyski_correlation_2011 P. Kurzy ńński, T. Paterek, R. Ramanathan, W. Laskowski, D. Kaszlikowski, Correlation Complementarity Yields Bell Monogamy Relations, Phys. Rev. Lett. 106 (2011) 180402. https://doi.org/10.1103/PhysRevLett.106.180402 doi:10.1103/PhysRevLett.106.180402. gamel_entangled_2016 O. Gamel, Entangled Bloch spheres: Bloch matrix and two-qubit state space, Phys. Rev. 
A 93 (6) (2016) 062320. https://doi.org/10.1103/PhysRevA.93.062320 doi:10.1103/PhysRevA.93.062320. wyderka_characterizing_2020 N. Wyderka, O. Gühne, Characterizing quantum states via sector lengths, J. Phys. A 53 (34) (2020) 345302. https://doi.org/10.1088/1751-8121/ab7f0a doi:10.1088/1751-8121/ab7f0a. morelli_correlation_2023 S. Morelli, C. Eltschka, M. Huber, J. Siewert, Correlation constraints and the bloch geometry of two qubits (2023). http://arxiv.org/abs/2303.11400 arXiv:2303.11400. kimura_bloch_2003 G. Kimura, The bloch vector for n-level systems, Phys. Lett. A 314 (5) (2003) 339–349. https://doi.org/10.1016/S0375-9601(03)00941-1 doi:10.1016/S0375-9601(03)00941-1. gell-mann_symmetries_1962 M. Gell-Mann, Symmetries of Baryons and Mesons, Phys. Rev. 125 (1962) 1067–1084. https://doi.org/10.1103/PhysRev.125.1067 doi:10.1103/PhysRev.125.1067. schlienz_description_1995 J. Schlienz, G. Mahler, Description of entanglement, Phys. Rev. A 52 (1995) 4396–4404. https://doi.org/10.1103/PhysRevA.52.4396 doi:10.1103/PhysRevA.52.4396. bertlmann_bloch_2008 R. A. Bertlmann, P. Krammer, Bloch vectors for qudits, J. Phys. A 41 (23) (2008) 235303. https://doi.org/10.1088/1751-8113/41/23/235303 doi:10.1088/1751-8113/41/23/235303. asadian_heisenberg-weyl_2016 A. Asadian, P. Erker, M. Huber, C. Klöckl, Heisenberg-Weyl Observables: Bloch vectors in phase space, Phys. Rev. A 94 (1) (2016) 010301. https://doi.org/10.1103/PhysRevA.94.010301 doi:10.1103/PhysRevA.94.010301. aschauer_local_2003 H. Aschauer, J. Calsamiglia, M. Hein, H. J. Briegel, Local invariants for multi-partite entangled states allowing for a simple entanglement criterion, arXiv preprint quant-ph/0306048 (2003). eltschka_maximum_2020 C. Eltschka, J. Siewert, Maximum N-body correlations do not in general imply genuine multipartite entanglement, Quantum 4 (2020) 229. https://doi.org/10.22331/q-2020-02-10-229 doi:10.22331/q-2020-02-10-229. miller_small_2019 D. Miller, Small quantum networks in the qudit stabilizer formalism (2019). http://arxiv.org/abs/1910.09551 arXiv:1910.09551. kaszlikowski_quantum_2008 D. Kaszlikowski, A. Sen(De), U. Sen, V. Vedral, A. Winter, Quantum correlation without classical correlations, Phys. Rev. Lett. 101 (2008) 070502. https://doi.org/10.1103/PhysRevLett.101.070502 doi:10.1103/PhysRevLett.101.070502. laskowski_incompatible_2012 W. Laskowski, M. Markiewicz, T. Paterek, M. Wieśniak, Incompatible local hidden-variable models of quantum correlations, Phys. Rev. A 86 (3) (September 2012). https://doi.org/10.1103/physreva.86.032105 doi:10.1103/physreva.86.032105. schwemmer_genuine_2015 C. Schwemmer, L. Knips, M. C. Tran, A. de Rosier, W. Laskowski, T. Paterek, H. Weinfurter, Genuine Multipartite Entanglement without Multipartite Correlations, Phys. Rev. Lett. 114 (18) (2015) 180501. https://doi.org/10.1103/PhysRevLett.114.180501 doi:10.1103/PhysRevLett.114.180501. tran_genuine_2017 M. C. Tran, M. Zuppardo, A. de Rosier, L. Knips, W. Laskowski, T. Paterek, H. Weinfurter, Genuine N-partite entanglement without N-partite correlation functions, Phys. Rev. A 95 (6) (2017) 062331. https://doi.org/10.1103/PhysRevA.95.062331 doi:10.1103/PhysRevA.95.062331. klobus_higher_2019 W. Kłobus, W. Laskowski, T. Paterek, M. Wieśniak, H. Weinfurter, Higher dimensional entanglement without correlations, The European Physical Journal D 73 (2) (2019) 29. https://doi.org/10.1140/epjd/e2018-90446-6 doi:10.1140/epjd/e2018-90446-6. makhlin_nonlocal_2002 Y. 
Makhlin, Nonlocal properties of two-qubit gates and mixed states, and the optimization of quantum computations, Quantum Information Processing 1 (4) (2002) 243–252. https://doi.org/10.1023/A:1022144002391 doi:10.1023/A:1022144002391. horodecki_quantumentropy_1996 R. Horodecki, P. Horodecki, M. Horodecki, Quantum α-entropy inequalities: independent condition for local realism?, Phys. Lett. A 210 (6) (1996) 377–381. https://doi.org/10.1016/0375-9601(95)00930-2 doi:10.1016/0375-9601(95)00930-2. peres_separability_1996 A. Peres, Separability Criterion for Density Matrices, Phys. Rev. Lett. 77 (8) (1996) 1413–1415. https://doi.org/10.1103/PhysRevLett.77.1413 doi:10.1103/PhysRevLett.77.1413. horodecki_separability_1996 M. Horodecki, P. Horodecki, R. Horodecki, Separability of mixed states: necessary and sufficient conditions, Phys. Lett. A 223 (1-2) (1996) 1–8. https://doi.org/10.1016/S0375-9601(96)00706-2 doi:10.1016/S0375-9601(96)00706-2. zyczkowski_volume_1998 K. Życzkowski, P. Horodecki, A. Sanpera, M. Lewenstein, Volume of the set of separable states, Phys. Rev. A 58 (1998) 883–892. https://doi.org/10.1103/PhysRevA.58.883 doi:10.1103/PhysRevA.58.883. zyczkowski_volume_1999 K.  ŻŻyczkowski, Volume of the set of separable states. ii, Phys. Rev. A 60 (1999) 3496–3507. https://doi.org/10.1103/PhysRevA.60.3496 doi:10.1103/PhysRevA.60.3496. vidal_computable_2002 G. Vidal, R. F. Werner, Computable measure of entanglement, Phys. Rev. A 65 (2002) 032314. https://doi.org/10.1103/PhysRevA.65.032314 doi:10.1103/PhysRevA.65.032314. plenio_logarithmic_2005 M. B. Plenio, Logarithmic negativity: A full entanglement monotone that is not convex, Phys. Rev. Lett. 95 (2005) 090503. https://doi.org/10.1103/PhysRevLett.95.090503 doi:10.1103/PhysRevLett.95.090503. Lee_partial_2000 J. Lee, M. S. Kim, Y. J. Park, S. Lee, Partial teleportation of entanglement in a noisy environment, Journal of Modern Optics 47 (12) (2000) 2151–2164. https://doi.org/10.1080/09500340008235138 doi:10.1080/09500340008235138. zhou_single-copies_2020 Y. Zhou, P. Zeng, Z. Liu, Single-Copies Estimation of Entanglement Negativity, Phys. Rev. Lett. 125 (20) (2020) 200502. https://doi.org/10.1103/PhysRevLett.125.200502 doi:10.1103/PhysRevLett.125.200502. elben_mixed-state_2020 A. Elben, R. Kueng, H.-Y. R. Huang, R. van Bijnen, C. Kokail, M. Dalmonte, P. Calabrese, B. Kraus, J. Preskill, P. Zoller, B. Vermersch, Mixed-State Entanglement from Local Randomized Measurements, Phys. Rev. Lett. 125 (20) (2020) 200501. https://doi.org/10.1103/PhysRevLett.125.200501 doi:10.1103/PhysRevLett.125.200501. roman_advanced_2005 S. Roman, S. Axler, F. Gehring, Advanced linear algebra, Vol. 3, Springer, 2005. curty_entanglement_2004 M. Curty, M. Lewenstein, N. Lütkenhaus, Entanglement as a precondition for secure quantum key distribution, Phys. Rev. Lett. 92 (2004) 217903. https://doi.org/10.1103/PhysRevLett.92.217903 doi:10.1103/PhysRevLett.92.217903. gray_machine-learning-assisted_2018 J. Gray, L. Banchi, A. Bayat, S. Bose, Machine-Learning-Assisted Many-Body Entanglement Measurement, Phys. Rev. Lett. 121 (15) (2018) 150503. https://doi.org/10.1103/PhysRevLett.121.150503 doi:10.1103/PhysRevLett.121.150503. brydges_probing_2019 T. Brydges, A. Elben, P. Jurcevic, B. Vermersch, C. Maier, B. P. Lanyon, P. Zoller, R. Blatt, C. F. Roos, Probing Rényi entanglement entropy via randomized measurements, Science 364 (6437) (2019) 260–263. https://doi.org/10.1126/science.aau4963 doi:10.1126/science.aau4963. yu_optimal_2021 X.-D. Yu, S. Imai, O. 
Gühne, Optimal Entanglement Certification from Moments of the Partial Transpose, Phys. Rev. Lett. 127 (2021) 060504. https://doi.org/10.1103/PhysRevLett.127.060504 doi:10.1103/PhysRevLett.127.060504. neven_symmetry-resolved_2021 A. Neven, J. Carrasco, V. Vitale, C. Kokail, A. Elben, M. Dalmonte, P. Calabrese, P. Zoller, B. Vermersch, R. Kueng, B. Kraus, Symmetry-resolved entanglement detection using partial transpose moments, npj Quantum Information 7 (1) (2021) 152. https://doi.org/10.1038/s41534-021-00487-y doi:10.1038/s41534-021-00487-y. bell_einstein_1964 J. S. Bell, On the Einstein Podolsky Rosen paradox, Physics Physique Fizika 1 (3) (1964) 195–200. https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195 doi:10.1103/PhysicsPhysiqueFizika.1.195. bell_theory_1976 J. S. Bell, The Theory of Local Beables, Epistemological Lett. 9 (1976) 11–24. wiseman_twobell_2014 H. M. Wiseman, The two Bell's theorems of John Bell, J. Phys. A 47 (42) (2014) 424001. https://doi.org/10.1088/1751-8113/47/42/424001 doi:10.1088/1751-8113/47/42/424001. acin_device-independent_2007 A. Acín, N. Brunner, N. Gisin, S. Massar, S. Pironio, V. Scarani, Device-Independent Security of Quantum Cryptography against Collective Attacks, Phys. Rev. Lett. 98 (23) (June 2007). https://doi.org/10.1103/physrevlett.98.230501 doi:10.1103/physrevlett.98.230501. acin_certified_2016 A. Acín, L. Masanes, Certified randomness in quantum physics, Nature 540 (7632) (2016) 213–219. https://doi.org/10.1038/nature20119 doi:10.1038/nature20119. buhrman_nonlocality_2010 H. Buhrman, R. Cleve, S. Massar, R. d. Wolf, Nonlocality and communication complexity, Reviews of Modern Physics 82 (1) (2010) 665–698. https://doi.org/10.1103/revmodphys.82.665 doi:10.1103/revmodphys.82.665. brunner_bell_2014 N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, S. Wehner, Bell nonlocality, Reviews of Modern Physics 86 (2) (2014) 419–478. https://doi.org/10.1103/RevModPhys.86.419 doi:10.1103/RevModPhys.86.419. scarani_bell_2019 V. Scarani, Bell nonlocality, Oxford University Press, 2019. clauser_proposed_1969 J. F. Clauser, M. A. Horne, A. Shimony, R. A. Holt, Proposed Experiment to Test Local Hidden-Variable Theories, Phys. Rev. Lett. 23 (15) (1969) 880–884. https://doi.org/10.1103/PhysRevLett.23.880 doi:10.1103/PhysRevLett.23.880. fine_hidden_1982 A. Fine, Hidden Variables, Joint Probability, and the Bell Inequalities, Phys. Rev. Lett. 48 (5) (1982) 291–295. https://doi.org/10.1103/physrevlett.48.291 doi:10.1103/physrevlett.48.291. hensen_loophole_2015 B. Hensen, H. Bernien, A. E. Dréau, A. Reiserer, N. Kalb, M. S. Blok, J. Ruitenberg, R. F. L. Vermeulen, R. N. Schouten, C. Abellán, W. Amaya, V. Pruneri, M. W. Mitchell, M. Markham, D. J. Twitchen, D. Elkouss, S. Wehner, T. H. Taminiau, R. Hanson, Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, Nature 526 (7575) (2015) 682–686. https://doi.org/10.1038/nature15759 doi:10.1038/nature15759. giustina_significant_2015 M. Giustina, M. A. Versteegh, S. Wengerowsky, J. Handsteiner, A. Hochrainer, K. Phelan, F. Steinlechner, J. Kofler, J.-Å. Larsson, C. Abellán, W. Amaya, V. Pruneri, M. W. Mitchell, J. Beyer, T. Gerrits, A. E. Lita, L. K. Shalm, S. W. Nam, T. Scheidl, R. Ursin, B. Wittmann, A. Zeilinger, Significant-Loophole-Free Test of Bell's Theorem with Entangled Photons, Phys. Rev. Lett. 115 (25) (December 2015). https://doi.org/10.1103/physrevlett.115.250401 doi:10.1103/physrevlett.115.250401. shalm_strong_2015 L. K. Shalm, E. Meyer-Scott, B. G. Christensen, P. Bierhorst, M. 
A. Wayne, M. J. Stevens, T. Gerrits, S. Glancy, D. R. Hamel, M. S. Allman, K. J. Coakley, S. D. Dyer, C. Hodge, A. E. Lita, V. B. Verma, C. Lambrocco, E. Tortorici, A. L. Migdall, Y. Zhang, D. R. Kumor, W. H. Farr, F. Marsili, M. D. Shaw, J. A. Stern, C. Abellán, W. Amaya, V. Pruneri, T. Jennewein, M. W. Mitchell, P. G. Kwiat, J. C. Bienfang, R. P. Mirin, E. Knill, S. W. Nam, Strong loophole-free test of local realism, Phys. Rev. Lett. 115 (25) (December 2015). https://doi.org/10.1103/physrevlett.115.250402 doi:10.1103/physrevlett.115.250402. rosenfeld_event_2017 W. Rosenfeld, D. Burchardt, R. Garthoff, K. Redeker, N. Ortegel, M. Rau, H. Weinfurter, Event-Ready Bell Test Using Entangled Atoms Simultaneously Closing Detection and Locality Loopholes, Phys. Rev. Lett. 119 (1) (July 2017). https://doi.org/10.1103/physrevlett.119.010402 doi:10.1103/physrevlett.119.010402. pitovsky_quantum_1989 I. Pitovsky, Quantum Probability – Quantum Logic, Springer, Berlin, 1989. werner_bell_2001 R. F. Werner, M. M. Wolf, Bell inequalities and entanglement, Quantum Information & Computation 1 (3) (2001) 1–25. zukowski_all_2002 M.  ŻŻukowski, C. Brukner, W. Laskowski, M. Wie śśniak, Do All Pure Entangled States Violate Bell's Inequalities for Correlation Functions?, Phys. Rev. Lett. 88 (2002) 210402. https://doi.org/10.1103/PhysRevLett.88.210402 doi:10.1103/PhysRevLett.88.210402. sliwa_symmetries_2003 C. Śliwa, Symmetries of the Bell correlation inequalities, Phys. Lett. A 317 (3) (2003) 165–168. https://doi.org/10.1016/S0375-9601(03)01115-0 doi:10.1016/S0375-9601(03)01115-0. bancal_looking_2010 J.-D. Bancal, N. Gisin, S. Pironio, Looking for symmetric Bell inequalities, J. Phys. A 43 (38) (2010) 385303. https://doi.org/10.1088/1751-8113/43/38/385303 doi:10.1088/1751-8113/43/38/385303. pironio_all_2014 S. Pironio, All clauser–horne–shimony–holt polytopes, J. Phys. A 47 (42) (2014) 424020. https://doi.org/10.1088/1751-8113/47/42/424020 doi:10.1088/1751-8113/47/42/424020. deza_enumeration_2015 M. Deza, M. D. Sikirić, Enumeration of the facets of cut polytopes over some highly symmetric graphs, International Transactions in Operational Research 23 (5) (2015) 853–860. https://doi.org/10.1111/itor.12194 doi:10.1111/itor.12194. horodecki_violating_1995 R. Horodecki, P. Horodecki, M. Horodecki, Violating Bell inequality by mixed spin-1/2 states: necessary and sufficient condition, Phys. Lett. A 200 (5) (1995) 340–344. https://doi.org/10.1016/0375-9601(95)00214-n doi:10.1016/0375-9601(95)00214-n. zukowski_bells_2002 M. Żukowski, C. Brukner, Bell's Theorem for General N-Qubit States, Phys. Rev. Lett. 88 (21) (2002) 210401. https://doi.org/10.1103/PhysRevLett.88.210401 doi:10.1103/PhysRevLett.88.210401. guhne_entanglement_2009 O. Gühne, G. Tóth, Entanglement detection, Physics Reports 474 (1-6) (2009) 1–75. https://doi.org/10.1016/j.physrep.2009.02.004 doi:10.1016/j.physrep.2009.02.004. horodecki_quantum_2009 R. Horodecki, P. Horodecki, M. Horodecki, K. Horodecki, Quantum entanglement, Reviews of Modern Physics 81 (2) (2009) 865–942. https://doi.org/10.1103/revmodphys.81.865 doi:10.1103/revmodphys.81.865. friis_entanglement_2018 N. Friis, G. Vitagliano, M. Malik, M. Huber, Entanglement certification from theory to experiment, Nature Reviews Physics 1 (1) (2018) 72–87. https://doi.org/10.1038/s42254-018-0003-5 doi:10.1038/s42254-018-0003-5. plenio_introduction_2007 M. B. Plenio, S. Virmani, An Introduction to Entanglement Measures, Quantum Info. Comput. 
7 (1) (2007) 1–51, place: Paramus, NJ, Publisher: Rinton Press, Incorporated. eltschka_quantifying_2014 C. Eltschka, J. Siewert, Quantifying entanglement resources, J. Phys. A 47 (42) (2014) 424005. https://doi.org/10.1088/1751-8113/47/42/424005 doi:10.1088/1751-8113/47/42/424005. vicente_further_2008 J. I. d. Vicente, Further results on entanglement detection and quantification from the correlation matrix criterion, J. Phys. A 41 (6) (2008) 065309. https://doi.org/10.1088/1751-8113/41/6/065309 doi:10.1088/1751-8113/41/6/065309. noauthor_lectures_2006 D. Bruß, G. Leuchs (Eds.), Frontmatter, John Wiley & Sons, Ltd, 2006, pp. 1–24. https://doi.org/10.1002/9783527618637.fmatter doi:10.1002/9783527618637.fmatter. dur_three_2000 W. Dür, G. Vidal, J. I. Cirac, Three qubits can be entangled in two inequivalent ways, Phys. Rev. A 62 (6) (2000) 062314. https://doi.org/10.1103/PhysRevA.62.062314 doi:10.1103/PhysRevA.62.062314. toth_detection_2007 G. Tóth, Detection of multipartite entanglement in the vicinity of symmetric Dicke states, Journal of the Optical Society of America B 24 (2) (2007) 275. https://doi.org/10.1364/josab.24.000275 doi:10.1364/josab.24.000275. briegel_persistent_2001 H. J. Briegel, R. Raussendorf, Persistent entanglement in arrays of interacting particles, Phys. Rev. Lett. 86 (2001) 910–913. https://doi.org/10.1103/PhysRevLett.86.910 doi:10.1103/PhysRevLett.86.910. hein_multiparty_2004 M. Hein, J. Eisert, H. J. Briegel, Multiparty entanglement in graph states, Phys. Rev. A 69 (2004) 062311. https://doi.org/10.1103/PhysRevA.69.062311 doi:10.1103/PhysRevA.69.062311. helwig_absolute_2012 W. Helwig, W. Cui, J. I. Latorre, A. Riera, H.-K. Lo, Absolute maximal entanglement and quantum secret sharing, Phys. Rev. A 86 (2012) 052335. https://doi.org/10.1103/PhysRevA.86.052335 doi:10.1103/PhysRevA.86.052335. huber_absolutely_2017 F. Huber, O. Gühne, J. Siewert, Absolutely maximally entangled states of seven qubits do not exist, Phys. Rev. Lett. 118 (2017) 200502. https://doi.org/10.1103/PhysRevLett.118.200502 doi:10.1103/PhysRevLett.118.200502. Klobus_k-uniform_2019 W. Kłobus, A. Burchardt, A. Kołodziejski, M. Pandit, T. Vértesi, K.  ŻŻyczkowski, W. Laskowski, k-uniform mixed states, Phys. Rev. A 100 (2019) 032112. https://doi.org/10.1103/PhysRevA.100.032112 doi:10.1103/PhysRevA.100.032112. gharibian_strong_2010 S. Gharibian, Strong NP-hardness of the quantum separability problem, Quantum Information & Computation 10 (3 & 4) (2010) 343–360. depillis_linear_1967 J. de Pillis, Linear transformations which preserve hermitian and positive semidefinite operators, Pacific Journal of Mathematics 23 (1) (1967) 129–137. choi_completely_1975 M.-D. Choi, Completely positive linear maps on complex matrices, Linear Algebra and its Applications 10 (3) (1975) 285–290. https://doi.org/10.1016/0024-3795(75)90075-0 doi:10.1016/0024-3795(75)90075-0. jamiolkowski_linear_1972 A. Jamiołkowski, Linear transformations which preserve trace and positive semidefiniteness of operators, Reports on Mathematical Physics 3 (4) (1972) 275–278. https://doi.org/10.1016/0034-4877(72)90011-0 doi:10.1016/0034-4877(72)90011-0. lewenstein_optimization_2000 M. Lewenstein, B. Kraus, J. I. Cirac, P. Horodecki, Optimization of entanglement witnesses, Phys. Rev. A 62 (2000) 052310. https://doi.org/10.1103/PhysRevA.62.052310 doi:10.1103/PhysRevA.62.052310. bruss_reflections_2002 D. Bruß, J. I. Cirac, P. Horodecki, F. Hulpke, B. Kraus, M. Lewenstein, A. 
Sanpera, Reflections upon separability and distillability, Journal of Modern Optics 49 (8) (2002) 1399–1418. https://doi.org/10.1080/09500340110105975 doi:10.1080/09500340110105975. acin_from_2006 A. Acín, N. Gisin, L. Masanes, From Bell's Theorem to Secure Quantum Key Distribution, Phys. Rev. Lett. 97 (2006) 120405. https://doi.org/10.1103/PhysRevLett.97.120405 doi:10.1103/PhysRevLett.97.120405. bancal_device-independent_2011 J.-D. Bancal, N. Gisin, Y.-C. Liang, S. Pironio, Device-independent witnesses of genuine multipartite entanglement, Phys. Rev. Lett. 106 (2011) 250404. https://doi.org/10.1103/PhysRevLett.106.250404 doi:10.1103/PhysRevLett.106.250404. pal_device_2014 K. F. Pál, T. Vértesi, M. Navascués, Device-independent tomography of multipartite quantum states, Phys. Rev. A 90 (2014) 042340. https://doi.org/10.1103/PhysRevA.90.042340 doi:10.1103/PhysRevA.90.042340. sorensen_many_2001 A. Sørensen, L.-M. Duan, J. I. Cirac, P. Zoller, Many-particle entanglement with Bose–Einstein condensates, Nature 409 (6816) (2001) 63–66. https://doi.org/10.1038/35051038 doi:10.1038/35051038. toth_optimal_2007 G. Tóth, C. Knapp, O. Gühne, H. J. Briegel, Optimal spin squeezing inequalities detect bound entanglement in spin models, Phys. Rev. Lett. 99 (2007) 250405. https://doi.org/10.1103/PhysRevLett.99.250405 doi:10.1103/PhysRevLett.99.250405. ma_quantum_2011 J. Ma, X. Wang, C. Sun, F. Nori, Quantum spin squeezing, Physics Reports 509 (2) (2011) 89–165. https://doi.org/10.1016/j.physrep.2011.08.003 doi:10.1016/j.physrep.2011.08.003. cerf_negative_1997 N. J. Cerf, C. Adami, Negative entropy and information in quantum mechanics, Phys. Rev. Lett. 79 (1997) 5194–5197. https://doi.org/10.1103/PhysRevLett.79.5194 doi:10.1103/PhysRevLett.79.5194. horodecki_partial_2005 M. Horodecki, J. Oppenheim, A. Winter, Partial quantum information, Nature 436 (7051) (2005) 673–676. coffman_distributed_2000 V. Coffman, J. Kundu, W. K. Wootters, Distributed entanglement, Phys. Rev. A 61 (2000) 052306. https://doi.org/10.1103/PhysRevA.61.052306 doi:10.1103/PhysRevA.61.052306. osborne_general_2006 T. J. Osborne, F. Verstraete, General monogamy inequality for bipartite qubit entanglement, Phys. Rev. Lett. 96 (2006) 220503. https://doi.org/10.1103/PhysRevLett.96.220503 doi:10.1103/PhysRevLett.96.220503. horodecki_mixedstate_1998 M. Horodecki, P. Horodecki, R. Horodecki, Mixed-state entanglement and distillation: Is there a “bound” entanglement in nature?, Phys. Rev. Lett. 80 (1998) 5239–5242. https://doi.org/10.1103/PhysRevLett.80.5239 doi:10.1103/PhysRevLett.80.5239. divincenzo_evidence_2000 D. P. DiVincenzo, P. W. Shor, J. A. Smolin, B. M. Terhal, A. V. Thapliyal, Evidence for bound entangled states with negative partial transpose, Phys. Rev. A 61 (2000) 062312. https://doi.org/10.1103/PhysRevA.61.062312 doi:10.1103/PhysRevA.61.062312. bennett_teleporting_1993 C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, W. K. Wootters, Teleporting an unknown quantum state via dual classical and einstein-podolsky-rosen channels, Phys. Rev. Lett. 70 (1993) 1895–1899. https://doi.org/10.1103/PhysRevLett.70.1895 doi:10.1103/PhysRevLett.70.1895. horodecki_quantum_2006 M. Horodecki, J. Oppenheim, A. Winter, Quantum State Merging and Negative Information, Communications in Mathematical Physics 269 (1) (2006) 107–136. https://doi.org/10.1007/s00220-006-0118-x doi:10.1007/s00220-006-0118-x. pezze_entanglement_2009 L. Pezzé, A. Smerzi, Entanglement, Nonlinear Dynamics, and the Heisenberg Limit, Phys. Rev. Lett. 102 (2009) 100401. 
https://doi.org/10.1103/PhysRevLett.102.100401 doi:10.1103/PhysRevLett.102.100401. toth_quantum_2014 G. Tóth, I. Apellaniz, Quantum metrology from a quantum information science perspective, J. Phys. A 47 (42) (2014) 424006. https://doi.org/10.1088/1751-8113/47/42/424006 doi:10.1088/1751-8113/47/42/424006. pezze_quantum_2018 L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, P. Treutlein, Quantum metrology with nonclassical states of atomic ensembles, Reviews of Modern Physics 90 (2018) 035005. https://doi.org/10.1103/RevModPhys.90.035005 doi:10.1103/RevModPhys.90.035005. dur_separability_1999 W. Dür, J. I. Cirac, R. Tarrach, Separability and Distillability of Multiparticle Quantum Systems, Phys. Rev. Lett. 83 (17) (1999) 3562–3565. https://doi.org/10.1103/PhysRevLett.83.3562 doi:10.1103/PhysRevLett.83.3562. dur_classification_2000 W. Dür, J. I. Cirac, Classification of multiqubit mixed states: Separability and distillability properties, Phys. Rev. A 61 (4) (March 2000). https://doi.org/10.1103/physreva.61.042314 doi:10.1103/physreva.61.042314. guhne_multipartite_2005 O. Gühne, G. Tóth, H. J. Briegel, Multipartite entanglement in spin chains, New J. Phys. 7 (2005) 229–229. https://doi.org/10.1088/1367-2630/7/1/229 doi:10.1088/1367-2630/7/1/229. guhne_energy_2006 O. Gühne, G. Tóth, Energy and multipartite entanglement in multidimensional and frustrated spin models, Phys. Rev. A 73 (5) (May 2006). https://doi.org/10.1103/physreva.73.052319 doi:10.1103/physreva.73.052319. hyllus_fisher_2012 P. Hyllus, W. Laskowski, R. Krischek, C. Schwemmer, W. Wieczorek, H. Weinfurter, L. Pezzé, A. Smerzi, Fisher information and multiparticle entanglement, Phys. Rev. A 85 (2) (February 2012). https://doi.org/10.1103/physreva.85.022321 doi:10.1103/physreva.85.022321. sorensen_entanglement_2001 A. S. Sørensen, K. Mølmer, Entanglement and Extreme Spin Squeezing, Phys. Rev. Lett. 86 (20) (2001) 4431–4434. https://doi.org/10.1103/physrevlett.86.4431 doi:10.1103/physrevlett.86.4431. szalay_kstretchability_2019 S. Szalay, k-stretchability of entanglement, and the duality of k-separability and k-producibility, Quantum 3 (2019) 204. https://doi.org/10.22331/q-2019-12-02-204 doi:10.22331/q-2019-12-02-204. toth_stretching_2020 G. Tóth, Stretching the limits of multiparticle entanglement, Quantum Views 4 (2020) 30. https://doi.org/10.22331/qv-2020-01-27-30 doi:10.22331/qv-2020-01-27-30. ren_metrological_2021 Z. Ren, W. Li, A. Smerzi, M. Gessner, Metrological detection of multipartite entanglement from young diagrams, Phys. Rev. Lett. 126 (2021) 080502. https://doi.org/10.1103/PhysRevLett.126.080502 doi:10.1103/PhysRevLett.126.080502. eisert_schmidt_2001 J. Eisert, H. J. Briegel, Schmidt measure as a tool for quantifying multiparticle entanglement, Phys. Rev. A 64 (2001) 022306. https://doi.org/10.1103/PhysRevA.64.022306 doi:10.1103/PhysRevA.64.022306. spengler_examining_2013 C. Spengler, M. Huber, A. Gabriel, B. C. Hiesmayr, Examining the dimensionality of genuine multipartite entanglement, Quantum information processing 12 (2013) 269–278. huber_structure_2013 M. Huber, J. I. de Vicente, Structure of multidimensional entanglement in multipartite systems, Phys. Rev. Lett. 110 (2013) 030501. https://doi.org/10.1103/PhysRevLett.110.030501 doi:10.1103/PhysRevLett.110.030501. kraft_characterizing_2018 T. Kraft, C. Ritz, N. Brunner, M. Huber, O. Gühne, Characterizing genuine multilevel entanglement, Phys. Rev. Lett. 120 (2018) 060502. https://doi.org/10.1103/PhysRevLett.120.060502 doi:10.1103/PhysRevLett.120.060502. 
navascues_genuine_2020 M. Navascués, E. Wolfe, D. Rosset, A. Pozas-Kerstjens, Genuine network multipartite entanglement, Phys. Rev. Lett. 125 (2020) 240505. https://doi.org/10.1103/PhysRevLett.125.240505 doi:10.1103/PhysRevLett.125.240505. tavakoli_bell_2022 A. Tavakoli, A. Pozas-Kerstjens, M.-X. Luo, M.-O. Renou, Bell nonlocality in networks, Reports on Progress in Physics 85 (5) (2022) 056001. https://doi.org/10.1088/1361-6633/ac41bb doi:10.1088/1361-6633/ac41bb. hansenne_symmetries_2022 K. Hansenne, Z.-P. Xu, T. Kraft, O. Guehne, Symmetries in quantum networks lead to no-go theorems for entanglement distribution and to verification techniques, Nature Communications 13 (1) (jan 2022). https://doi.org/10.1038/s41467-022-28006-3 doi:10.1038/s41467-022-28006-3. collins_integration_2006 B. Collins, P. Śniady, Integration with Respect to the Haar Measure on Unitary, Orthogonal and Symplectic Group, Communications in Mathematical Physics 264 (3) (2006) 773–795. https://doi.org/10.1007/s00220-006-1554-3 doi:10.1007/s00220-006-1554-3. puchala_symbolic_2017 Z. Puchała, J. A. Miszczak, Symbolic integration with respect to the Haar measure on the unitary groups, Bulletin of the Polish Academy of Sciences Technical Sciences 65 (1) (2017) 21–27. https://doi.org/10.1515/bpasts-2017-0003 doi:10.1515/bpasts-2017-0003. spengler_composite_2012 C. Spengler, M. Huber, B. C. Hiesmayr, Composite parameterization and Haar measure for all unitary and special unitary groups, Journal of Mathematical Physics 53 (1) (2012) 013501. https://doi.org/10.1063/1.3672064 doi:10.1063/1.3672064. zhang_matrix_2015 L. Zhang, Matrix integrals over unitary groups: An application of Schur-Weyl duality (2015). http://arxiv.org/abs/1408.3782 arXiv:1408.3782. collins_weingarten_2021 B. Collins, S. Matsumoto, J. Novak, The weingarten calculus (2021). http://arxiv.org/abs/2109.14890 arXiv:2109.14890. tilma_generalized_2002 T. Tilma, E. C. G. Sudarshan, Generalized Euler angle parametrization for SU(N), J. Phys. A 35 (48) (2002) 10467. https://doi.org/10.1088/0305-4470/35/48/316 doi:10.1088/0305-4470/35/48/316. sakurai_modern_1995 J. J. Sakurai, E. D. Commins, Modern quantum mechanics, revised edition (1995). horodecki_quantum_2008 K. Horodecki, M. Horodecki, P. Horodecki, D. Leung, J. Oppenheim, Quantum key distribution based on private states: Unconditional security over untrusted channels with zero quantum capacity, IEEE Transactions on Information Theory 54 (6) (2008) 2604–2620. https://doi.org/10.1109/TIT.2008.921870 doi:10.1109/TIT.2008.921870. wyderka_complete_2022 N. Wyderka, A. Ketterer, S. Imai, J. L. Bönsel, D. E. Jones, B. T. Kirby, X.-D. Yu, O. Gühne, Complete characterization of quantum correlations by randomized measurements (2022). https://doi.org/10.48550/ARXIV.2212.07894 doi:10.48550/ARXIV.2212.07894. imai_bound_2021 S. Imai, N. Wyderka, A. Ketterer, O. Gühne, Bound Entanglement from Randomized Measurements, Phys. Rev. Lett. 126 (15) (2021) 150501. https://doi.org/10.1103/PhysRevLett.126.150501 doi:10.1103/PhysRevLett.126.150501. wyderka_probing_2023 N. Wyderka, A. Ketterer, Probing the geometry of correlation matrices with randomized measurements, PRX Quantum 4 (2023) 020325. https://doi.org/10.1103/PRXQuantum.4.020325 doi:10.1103/PRXQuantum.4.020325. bannai_survey_2009 E. Bannai, E. Bannai, A survey on spherical designs and algebraic combinatorics on spheres, European Journal of Combinatorics 30 (6) (2009) 1392–1425. https://doi.org/10.1016/j.ejc.2008.11.007 doi:10.1016/j.ejc.2008.11.007. hardin_mclaren_1996 R. H. 
Hardin, N. J. A. Sloane, McLaren's Improved Snub Cube and Other New Spherical Designs in Three Dimensions (2002). http://arxiv.org/abs/math/0207211 arXiv:math/0207211. renes_symmetric_2004 J. M. Renes, R. Blume-Kohout, A. J. Scott, C. M. Caves, Symmetric informationally complete quantum measurements, Journal of Mathematical Physics 45 (6) (2004) 2171–2180. https://doi.org/10.1063/1.1737053 doi:10.1063/1.1737053. ambainis_quantum_2007 A. Ambainis, J. Emerson, Quantum t-designs: t-wise independence in the quantum world (2007). http://arxiv.org/abs/quant-ph/0701126 arXiv:quant-ph/0701126. bengtsson_geometry_2017 I. Bengtsson, K. Życzkowski, Geometry of Quantum States: An Introduction to Quantum Entanglement, 2nd Edition, Cambridge University Press, 2017. https://doi.org/10.1017/9781139207010 doi:10.1017/9781139207010. wotters_statistical_1981 W. K. Wootters, Statistical distance and Hilbert space, Phys. Rev. D 23 (1981) 357–362. https://doi.org/10.1103/PhysRevD.23.357 doi:10.1103/PhysRevD.23.357. barenco_stabilization_1997 A. Barenco, A. Berthiaume, D. Deutsch, A. Ekert, R. Jozsa, C. Macchiavello, Stabilization of quantum computations by symmetrization, SIAM Journal on Computing 26 (5) (1997) 1541–1557. https://doi.org/10.1137/S0097539796302452 doi:10.1137/S0097539796302452. harrow_church_2013 A. W. Harrow, The church of the symmetric subspace (2013). http://arxiv.org/abs/1308.6595 arXiv:1308.6595. brandao_mathematics_2016 F. G. S. L. Brandao, M. Christandl, A. W. Harrow, M. Walter, The mathematics of entanglement (2016). http://arxiv.org/abs/1604.01790 arXiv:1604.01790. low_pseudo_2010 R. A. Low, Pseudo-randomness and learning in quantum computation (2010). http://arxiv.org/abs/1006.5227 arXiv:1006.5227. ketterer_entropic_2020 A. Ketterer, O. Gühne, Entropic uncertainty relations from quantum designs, Phys. Rev. Res. 2 (2) (2020) 023130. https://doi.org/10.1103/PhysRevResearch.2.023130 doi:10.1103/PhysRevResearch.2.023130. welch_lower_1974 L. Welch, Lower bounds on the maximum cross correlation of signals (corresp.), IEEE Transactions on Information Theory 20 (3) (1974) 397–399. https://doi.org/10.1109/TIT.1974.1055219 doi:10.1109/TIT.1974.1055219. klappenecker_mutually_2005 A. Klappenecker, M. Roetteler, Mutually unbiased bases are complex projective 2-designs (2005). http://arxiv.org/abs/quant-ph/0502031 arXiv:quant-ph/0502031. durt_mutually_2010 T. Durt, B.-G. Englert, I. Bengtsson, K. Życzkowski, On Mutually Unbiased Bases, International Journal of Quantum Information 08 (04) (2010) 535–640. https://doi.org/10.1142/S0219749910006502 doi:10.1142/S0219749910006502. horodecki_five_2020 P. Horodecki, L. Rudnicki, K.  ŻŻyczkowski, Five open problems in quantum information theory, PRX Quantum 3 (2022) 010101. https://doi.org/10.1103/PRXQuantum.3.010101 doi:10.1103/PRXQuantum.3.010101. weiner_gap_2013 M. Weiner, A gap for the maximum number of mutually unbiased bases (2010). https://doi.org/10.48550/arXiv.0902.0635 doi:10.48550/arXiv.0902.0635. ivonovic_geometrical_1981 I. D. Ivonovic, Geometrical description of quantal state determination, J. Phys. A 14 (12) (1981) 3241. https://doi.org/10.1088/0305-4470/14/12/019 doi:10.1088/0305-4470/14/12/019. noauthor_optimal_1989 W. K. Wootters, B. D. Fields, Optimal state-determination by mutually unbiased measurements, Annals of Physics 191 (2) (1989) 363–381. https://doi.org/10.1016/0003-4916(89)90322-9 doi:10.1016/0003-4916(89)90322-9. wiesniak_entanglement_2011 M. Wieśniak, T. Paterek, A. Zeilinger, Entanglement in mutually unbiased bases, New J. 
Phys. 13 (5) (2011) 053047. https://doi.org/10.1088/1367-2630/13/5/053047 doi:10.1088/1367-2630/13/5/053047. seyfarth_construction_2011 U. Seyfarth, K. S. Ranade, Construction of mutually unbiased bases with cyclic symmetry for qubit systems, Phys. Rev. A 84 (2011) 042327. https://doi.org/10.1103/PhysRevA.84.042327 doi:10.1103/PhysRevA.84.042327. zauner_grundzuge_1999 G. Zauner, https://www.mat.univie.ac.at/ neum/ms/zauner.pdfGrundzuege einer nichtkommutativen Designtheorie, Ph. D. dissertation, PhD thesis (1999). <https://www.mat.univie.ac.at/ neum/ms/zauner.pdf> gross_evenly_2007 D. Gross, K. Audenaert, J. Eisert, Evenly distributed unitaries: On the structure of unitary designs, Journal of Mathematical Physics 48 (5) (05 2007). https://doi.org/10.1063/1.2716992 doi:10.1063/1.2716992. dankert_exact_2009 C. Dankert, R. Cleve, J. Emerson, E. Livine, Exact and approximate unitary 2-designs and their application to fidelity estimation, Phys. Rev. A 80 (2009) 012304. https://doi.org/10.1103/PhysRevA.80.012304 doi:10.1103/PhysRevA.80.012304. scott_optimizing_2008 A. J. Scott, Optimizing quantum process tomography with unitary 2-designs, J. Phys. A 41 (5) (2008) 055308. https://doi.org/10.1088/1751-8113/41/5/055308 doi:10.1088/1751-8113/41/5/055308. roberts_chaos_2017 D. A. Roberts, B. Yoshida, Chaos and complexity by design, Journal of High Energy Physics 2017 (4) (apr 2017). https://doi.org/10.1007/jhep04(2017)121 doi:10.1007/jhep04(2017)121. kostenberger_weingarten_2021 G. Köstenberger, Weingarten calculus (2021). http://arxiv.org/abs/2101.00921 arXiv:2101.00921. vollbrecht_entanglement_2001 K. G. H. Vollbrecht, R. F. Werner, Entanglement measures under symmetry, Phys. Rev. A 64 (2001) 062307. https://doi.org/10.1103/PhysRevA.64.062307 doi:10.1103/PhysRevA.64.062307. eggeling_separability_2001 T. Eggeling, R. F. Werner, Separability properties of tripartite states with uuu symmetry, Phys. Rev. A 63 (2001) 042111. https://doi.org/10.1103/PhysRevA.63.042111 doi:10.1103/PhysRevA.63.042111. werner_quantum_1989 R. F. Werner, Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model, Phys. Rev. A 40 (1989) 4277–4281. https://doi.org/10.1103/PhysRevA.40.4277 doi:10.1103/PhysRevA.40.4277. horodecki_method_2002 P. Horodecki, A. Ekert, Method for direct detection of quantum entanglement, Phys. Rev. Lett. 89 (2002) 127902. https://doi.org/10.1103/PhysRevLett.89.127902 doi:10.1103/PhysRevLett.89.127902. ekert_direct_2002 A. K. Ekert, C. M. Alves, D. K. L. Oi, M. Horodecki, P. Horodecki, L. C. Kwek, Direct estimations of linear and nonlinear functionals of a quantum state, Phys. Rev. Lett. 88 (2002) 217901. https://doi.org/10.1103/PhysRevLett.88.217901 doi:10.1103/PhysRevLett.88.217901. harrow_random_2009 A. W. Harrow, R. A. Low, Random Quantum Circuits are Approximate 2-designs, Communications in Mathematical Physics 291 (1) (2009) 257–302. https://doi.org/10.1007/s00220-009-0873-6 doi:10.1007/s00220-009-0873-6. huber_positive_2021 F. Huber, Positive maps and trace polynomials from the symmetric group, Journal of Mathematical Physics 62 (2) (02 2021). https://doi.org/10.1063/5.0028856 doi:10.1063/5.0028856. huber_refuting_2023 F. Huber, N. Wyderka, Refuting spectral compatibility of quantum marginals (2023). http://arxiv.org/abs/2211.06349 arXiv:2211.06349. rico_entanglement_2023 A. Rico, F. Huber, Entanglement detection with trace polynomials (2023). http://arxiv.org/abs/2303.07761 arXiv:2303.07761. garcia_quantum_2021 R. J. Garcia, Y. Zhou, A. 
Jaffe, Quantum scrambling with classical shadows, Phys. Rev. Res. 3 (3) (August 2021). https://doi.org/10.1103/physrevresearch.3.033155 doi:10.1103/physrevresearch.3.033155. brandao_models_2021 F. G. Brandão, W. Chemissany, N. Hunter-Jones, R. Kueng, J. Preskill, Models of quantum complexity growth, PRX Quantum 2 (2021) 030316. https://doi.org/10.1103/PRXQuantum.2.030316 doi:10.1103/PRXQuantum.2.030316. hunter_chaos_2018 N. Hunter-Jones, J. Liu, Chaos and random matrices in supersymmetric SYK, Journal of High Energy Physics 2018 (5) (2018) 1–26. roy_unitary_2009 A. Roy, A. J. Scott, Unitary designs and codes, Designs, Codes and Cryptography 53 (1) (2009) 13–31. https://doi.org/10.1007/s10623-009-9290-2 doi:10.1007/s10623-009-9290-2. webb_clifford_2016 Z. Webb, The clifford group forms a unitary 3-design (2016). https://doi.org/10.48550/arXiv.1510.02769 doi:10.48550/arXiv.1510.02769. zhu_clifford_2016 H. Zhu, R. Kueng, M. Grassl, D. Gross, The clifford group fails gracefully to be a unitary 4-design (2016). https://doi.org/10.48550/arXiv.1609.08172 doi:10.48550/arXiv.1609.08172. hassan_separability_2008 A. S. M. Hassan, P. S. Joag, Separability criterion for multipartite quantum states based on the bloch representation of density matrices (2008). http://arxiv.org/abs/0704.3942 arXiv:0704.3942. hassan_experimentally_2008 A. S. M. Hassan, P. S. Joag, Experimentally accessible geometric measure for entanglement in 𝑁-qubit pure states, Phys. Rev. A 77 (2008) 062334. https://doi.org/10.1103/PhysRevA.77.062334 doi:10.1103/PhysRevA.77.062334. hassan_geometric_2009 A. S. M. Hassan, P. S. Joag, Geometric measure for entanglement in N-qudit pure states, Phys. Rev. A 80 (4) (2009) 042302. https://doi.org/10.1103/PhysRevA.80.042302 doi:10.1103/PhysRevA.80.042302. nielsen_conditions_1999 M. A. Nielsen, Conditions for a Class of Entanglement Transformations, Phys. Rev. Lett. 83 (2) (1999) 436–439. https://doi.org/10.1103/physrevlett.83.436 doi:10.1103/physrevlett.83.436. horodecki_information-theoretic_1996 R. Horodecki, M. Horodecki, Information-theoretic aspects of inseparability of mixed states, Phys. Rev. A 54 (3) (1996) 1838–1843. https://doi.org/10.1103/PhysRevA.54.1838 doi:10.1103/PhysRevA.54.1838. dur_multiparticle_2001 W. Dür, J. I. Cirac, Multiparticle entanglement and its experimental detection, J. Phys. A 34 (35) (2001) 6837. https://doi.org/10.1088/0305-4470/34/35/310 doi:10.1088/0305-4470/34/35/310. guhne_multiparticle_2011 O. Gühne, B. Jungnitsch, T. Moroder, Y. S. Weinstein, Multiparticle entanglement in graph-diagonal states: Necessary and sufficient conditions for four qubits, Phys. Rev. A 84 (2011) 052319. https://doi.org/10.1103/PhysRevA.84.052319 doi:10.1103/PhysRevA.84.052319. eltschka_entanglement_2012 C. Eltschka, J. Siewert, Entanglement of Three-Qubit Greenberger-Horne-Zeilinger–Symmetric States, Phys. Rev. Lett. 108 (2012) 020502. https://doi.org/10.1103/PhysRevLett.108.020502 doi:10.1103/PhysRevLett.108.020502. augusiak_universal_2008 R. Augusiak, M. Demianowicz, P. Horodecki, Universal observable detecting all two-qubit entanglement and determinant-based separability tests, Phys. Rev. A 77 (3) (2008) 030301. https://doi.org/10.1103/PhysRevA.77.030301 doi:10.1103/PhysRevA.77.030301. lawson_reliable_2014 T. Lawson, A. Pappa, B. Bourdoncle, I. Kerenidis, D. Markham, E. Diamanti, Reliable experimental quantification of bipartite entanglement without reference frames, Phys. Rev. A 90 (4) (2014) 042336. https://doi.org/10.1103/PhysRevA.90.042336 doi:10.1103/PhysRevA.90.042336. 
hill_entanglement_1997 S. Hill, W. K. Wootters, Entanglement of a pair of quantum bits, Phys. Rev. Lett. 78 (26) (1997) 5022–h–5025. https://doi.org/10.1103/physrevlett.78.5022 doi:10.1103/physrevlett.78.5022. elben_renyi_2018 A. Elben, B. Vermersch, M. Dalmonte, J. I. Cirac, P. Zoller, Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models, Phys. Rev. Lett. 120 (5) (2018) 050406. https://doi.org/10.1103/PhysRevLett.120.050406 doi:10.1103/PhysRevLett.120.050406. imai_work_2023 S. Imai, O. Gühne, S. Nimmrichter, Work fluctuations and entanglement in quantum batteries, Phys. Rev. A 107 (2023) 022215. https://doi.org/10.1103/PhysRevA.107.022215 doi:10.1103/PhysRevA.107.022215. vollbrecht_conditional_2002 K. G. H. Vollbrecht, M. M. Wolf, Conditional entropies and their relation to entanglement criteria, Journal of Mathematical Physics 43 (9) (2002) 4299–4306. https://doi.org/10.1063/1.1498490 doi:10.1063/1.1498490. guhne_entropic_2004 O. Gühne, M. Lewenstein, Entropic uncertainty relations and entanglement, Phys. Rev. A 70 (2004) 022316. https://doi.org/10.1103/PhysRevA.70.022316 doi:10.1103/PhysRevA.70.022316. hiroshima_majorization_2003 T. Hiroshima, Majorization criterion for distillability of a bipartite quantum state, Phys. Rev. Lett. 91 (2003) 057902. https://doi.org/10.1103/PhysRevLett.91.057902 doi:10.1103/PhysRevLett.91.057902. de_vicente_separability_2007 J. de Vicente, Separability criteria based on the Bloch representation of density matrices, Quantum Information and Computation 7 (7) (2007) 624–638. https://doi.org/10.26421/QIC7.7-5 doi:10.26421/QIC7.7-5. liu_characterizing_2022 S. Liu, Q. He, M. Huber, O. Gühne, G. Vitagliano, Characterizing entanglement dimensionality from randomized measurements (2022). http://arxiv.org/abs/2211.09614 arXiv:2211.09614. bruss_construction_2000 D. Bruß, A. Peres, Construction of quantum states with bound entanglement, Phys. Rev. A 61 (2000) 030301. https://doi.org/10.1103/PhysRevA.61.030301 doi:10.1103/PhysRevA.61.030301. bound_experi_fourth_dete in preparation. markiewicz_detecting_2013 M. Markiewicz, W. Laskowski, T. Paterek, M.  ŻŻukowski, Detecting genuine multipartite entanglement of pure states with bipartite correlations, Phys. Rev. A 87 (2013) 034301. https://doi.org/10.1103/PhysRevA.87.034301 doi:10.1103/PhysRevA.87.034301. klockl_characterizing_2015 C. Klöckl, M. Huber, Characterizing multipartite entanglement without shared reference frames, Phys. Rev. A 91 (4) (2015) 042339. https://doi.org/10.1103/PhysRevA.91.042339 doi:10.1103/PhysRevA.91.042339. huber_some_2018 F. Huber, S. Severini, Some ulam’s reconstruction problems for quantum states, J. Phys. A 51 (43) (2018) 435301. https://doi.org/10.1088/1751-8121/aadd1e doi:10.1088/1751-8121/aadd1e. miller_sector_2022 D. Miller, D. Loss, I. Tavernelli, H. Kampermann, D. Bruß, N. Wyderka, Sector length distributions of graph states (2022). https://doi.org/10.48550/arXiv.2207.07665 doi:10.48550/arXiv.2207.07665. ketterer_statistically_2022 A. Ketterer, S. Imai, N. Wyderka, O. Gühne, Statistically significant tests of multiparticle quantum correlations based on randomized measurements, Phys. Rev. A 106 (1) (July 2022). https://doi.org/10.1103/physreva.106.l010402 doi:10.1103/physreva.106.l010402. lohmayer_entangled_2006 R. Lohmayer, A. Osterloh, J. Siewert, A. Uhlmann, Entangled three-qubit states without concurrence and three-tangle, Phys. Rev. Lett. 97 (2006) 260502. https://doi.org/10.1103/PhysRevLett.97.260502 doi:10.1103/PhysRevLett.97.260502. knips_multipartite_2020 L. 
http://arxiv.org/abs/2307.02639v1
20230705201527
Energy Transfer in Random-Matrix ensembles of Floquet Hamiltonians
[ "Christina Psaroudaki", "Gil Refael" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "quant-ph" ]
Laboratoire de Physique de l’École normale supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA 91125, USA Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125, USA We explore the statistical properties of energy transfer in ensembles of doubly-driven Random-Matrix Floquet Hamiltonians, based on universal symmetry arguments. The energy pumping efficiency distribution P(E̅) is associated with the Hamiltonian parameter ensemble and the eigenvalue statistics of the Floquet operator. For specific Hamiltonian ensembles, P(E̅) undergoes a transition which cannot be associated with a symmetry breaking of the instantaneous Hamiltonian. The Floquet eigenvalue spacing distribution indicates the considered ensembles constitute generic nonintegrable Hamiltonian families. As a step towards Hamiltonian engineering, we develop a machine-learning classifier to understand the relative parameter importance in achieving high conversion efficiency. We propose Random Floquet Hamiltonians as a general framework to investigate frequency conversion effects in a new class of generic dynamical processes beyond adiabatic pumps. Energy Transfer in Random-Matrix ensembles of Floquet Hamiltonians Christina Psaroudaki Gil Refael August 1, 2023 ================================================================== § INTRODUCTION Periodic driving of a quantum system is a versatile tool for its coherent control, allowing one to engineer quantum phases of matter with various applications <cit.>. It opens up the possibility to artificially realize exotic topological systems <cit.>, many of which have no static analog <cit.>. Among them, a class of double-drive Hamiltonians displays quantized adiabatic pumping of energy between the two drives <cit.>, in close analogy with the Thouless topological charge pumping <cit.> and its inverse effect in adiabatic quantum motors <cit.>. Our contribution to the collection in honor of Emmanuel Rashba focuses on energy pumping in doubly-driven systems. Indeed, energy pumping between multiple drives is analogous to the anomalous Hall effect in spin-orbit-coupled bands. With this, the seminal work of Rashba on spin-orbit effects in solids <cit.> finds an application in the synthetic-dimension picture of multiply driven systems. Energy pumping between multiple drives could be a crucial element for quantum machines and amplifiers in the terahertz regime. Quantized energy flow between two incommensurate drives is predicted in temporal analogs of two-dimensional topological insulators <cit.>. Nevertheless, quantized pumping emerges only as long as the system is in the near-adiabatic limit, during which any instantaneous bulk gap is maintained, and is restricted by the topology of the relevant band <cit.>. Quantized energy transfer has been predicted only for specific double-drive topological models inside the model's topological phase and for irrationally-related drive frequencies <cit.>. Energy flow outside this relatively small part of the parameter space is practically unexplored and is not expected to remain robust to nonadiabatic driving conditions <cit.>.
In this limit, the regime of harmonic frequency ratios is particularly interesting as it allows an energy conversion rate exceeding the quantized value in both the topological and trivial class <cit.> and a sustained response in the nonadiabatic regime <cit.>. Here, we propose Random Floquet Hamiltonians as a general framework to investigate frequency conversion effects in a relatively large parameter space and a powerful tool to explore a new class of generic dynamical processes beyond adiabatic pumps. Since Wigner’s original proposal on the use of random matrices to describe properties of highly excited nuclear levels in complex nuclei <cit.>, Random Matrix Theory (RMT) has been applied in a variety of physical problems, including quantum transport <cit.> and quantum chaotic systems <cit.>. Inspired by the universality of RMT, we study the statistical properties of the energy-pumping effect for an ensemble of doubly-driven Random Floquet Hamiltonians. Of primary interest is the characterization of the energy pumping efficiency distribution and its relation to Hamiltonian distributions and Floquet level statistics. We use the basic properties of RMT as a standard diagnostic tool of generic nonintegrable ensembles. From an analysis of various instantaneous Hamiltonian symmetries, it follows that the energy pumping efficiency distribution P(E̅) depends on the Hamiltonian parameter ensemble. For a Gaussian Hamiltonian distribution, the energy pumping has no linear correlation to the Hamiltonian norm. Remarkably, for a spherical Hamiltonian ensemble and a Hamiltonian ensemble with complex parameters, P(E̅) undergoes a transition that cannot be associated with a symmetry breaking of the instantaneous Hamiltonian. The Floquet spacing statistics exhibit a linear (quadratic) level repulsion at small spacings for Hamiltonians with real (complex) parameters, and its form indicates the considered ensemble constitutes a generic Hamiltonian family. In all cases we considered, we find the universal behavior P(E̅) = f(E̅/β) with β∝σ^4, and σ being a scale parameter specific to the distribution. As a step towards Hamiltonian engineering targeting high conversion efficiency, we develop a machine-learning model classifier to extract the importance of each Hamiltonian parameter, applied to a class of random temporal topological models. Our results can be implemented in a variety of double-drive two-level systems, including a spin-1/2 <cit.>, single-qubit systems <cit.> and non-interacting atoms trapped in an optical cavity <cit.>. The structure of the paper is as follows. In Sec. <ref> we introduce the energy pumping in a family of two-frequency Hamiltonians. In Sec. <ref> we numerically calculate the energy pumping efficiency, while in Sec. <ref> we develop a machine-learning classifier. A discussion on analytical bounds is included in Sec. <ref>. Our main conclusions are summarized in Sec. <ref>, while some technical details are deferred to three Appendices. § FREQUENCY CONVERSION Our analysis begins by considering a family of two-frequency Hermitian 2× 2 Hamiltonians H(t)=F_0 + H_1(ω_1 t +ϕ_1)+H_2(ω_2 t +ϕ_2) , with H_i being periodic function of time with frequency ω_i and phase ϕ_i. 
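As a minimal numerical sketch of this construction (assuming Python/NumPy; the commensurate frequencies, random coefficients, and step count below are illustrative choices rather than the parameters used in the calculations reported here), the single-period evolution operator can be assembled as a time-ordered product of short propagators and diagonalized to obtain the quasienergies:

import numpy as np

# Pauli matrices and identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def hamiltonian(t, F0, F1, F2, w1, w2, phi1=0.0, phi2=0.0):
    # H(t) = F0 + F1 cos(w1 t + phi1) + F2 cos(w2 t + phi2)
    return F0 + F1 * np.cos(w1 * t + phi1) + F2 * np.cos(w2 * t + phi2)

def evolution_operator(F0, F1, F2, w1, w2, T, n_steps=4000):
    # time-ordered product of short-time propagators exp(-i H dt), midpoint rule
    dt = T / n_steps
    U = np.eye(2, dtype=complex)
    for k in range(n_steps):
        H = hamiltonian((k + 0.5) * dt, F0, F1, F2, w1, w2)
        w, v = np.linalg.eigh(H)
        U = (v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T) @ U
    return U

# illustrative realization: commensurate drives w1/w2 = 2/3 with common period T = 6*pi
rng = np.random.default_rng(0)
w2 = 1.0
w1 = 2.0 / 3.0 * w2
T = 6.0 * np.pi / w2
coeffs = [sum(h * p for h, p in zip(rng.normal(0.0, 1.0, 4), (s0, sx, sy, sz)))
          for _ in range(3)]
U_T = evolution_operator(*coeffs, w1, w2, T)
# quasienergies eps_n defined through U(T)|psi_n> = exp(-i eps_n T)|psi_n>
quasienergies = -np.angle(np.linalg.eigvals(U_T)) / T

The integrated powers E_i(t) introduced next can be accumulated on the same time grid.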
The main quantity of interest is the integrated power absorbed (or spent) by the drive i=1,2, E_i (t)=∫_0^t dt' ⟨ψ(t') |d H_i(t')/dt'|ψ (t') ⟩ , where |ψ (t) ⟩ = U(t) |ψ_0 ⟩ is the instantaneous eigenstate, U(t) = 𝒯exp[-i ∫_0^t H(t') dt'] is the time evolution operator with 𝒯 the time-ordering operator, and |ψ_0 ⟩ the initial state. Each E_i(t) depends linearly on time, although the rates of work performed by the two sources add up approximately to zero, E̅= lim_t→∞E_1(t)/t = -lim_t→∞E_2(t)/t. We aim to characterize the frequency conversion efficiency E̅ for generic Floquet Hamiltonians. In the special case of the temporal analog of the topological Bernevig-Hughes-Zhang (BHZ) model <cit.> the Hall response translates to a quantized pumping of energy E̅= ω_1 ω_2/2π=E_Q which emerges in the adiabatic limit η≫ω_i, with η the driving amplitude, rationally independent frequencies ω_1/ω_2 ≡γ with γ∉ℚ, and topological regime <cit.>. Rationally-related frequencies ω_1/ω_2 ≡ q/p for p,q ∈ℤ, exhibit a sustained response which could exceed the quantized value E_Q in both the topological and trivial regime <cit.>, as well as in the nonadiabatic limit η≪ω_i <cit.>. Details of the energy pumping for the temporal BHZ model are summarized in the Appendix <ref>. The goal of the present work is to characterize generic frequency conversion dynamical processes generated by random Floquet Hamiltonians. In the most general case, adiabatic cycles where there is always an energy gap in the instantaneous spectrum should not be expected. It thus appears promising to focus on rationally-related frequencies such that the system is strictly periodic with a period equal to T ≡ 2π/pω_1 = 2π/q ω_2. To maximize the energy transfer, we consider Floquet eigenstate initialization with |ψ_0 ⟩ the lowest of the two Floquet eigenstates U(T)|ψ_n ⟩ =e^-i ε_n T|ψ_n ⟩. U(T) is the Floquet single-period evolution operator <cit.>. When the two frequencies form a rational fraction, a further simplification of the energy formula can be obtained. For a general multi-driven Floquet problem it holds E_i (t)=∫_0^t dt' ⟨ψ(t') |ω_i∂ H(t') /∂ϕ_i|ψ(t') ⟩ =ω_i∫_0^t dt' ⟨ψ(0) | U^†(t') ∂ H(t') /∂ϕ_i U(t')|ψ(0)⟩ =iω_i⟨ψ(0) | U^†(t) ∂ U(t) /∂ϕ_i|ψ(0) ⟩ =iω_i⟨ψ(0) |∂log U(t) /∂ϕ_i|ψ(0) ⟩=iω_i∂/∂ϕ_i⟨ψ(0) |log U(t)|ψ(0) ⟩ . Thus, the work done is related directly to the dependence of the trace log of the evolution operator. In the above we used ∂ U(t)/∂ϕ_i=∫_0^t dt' U(t',t) (-i ∂ H(t') /∂ϕ_i)U(0,t') with U(t_1,t_2) the propagator between the times t_1 and t_2. Next, let us concentrate on the rational-fraction case, where the system has a time-periodic Hamiltonian with period T. Also, we assume that the system is initiated into a Floquet eigenstate |ψ_ε⟩ with Floquet eigenenergy ε. Following from Eq. (<ref>) E_i (T)=iω_i⟨ψ_ε|∂log U(T) /∂ϕ_i|ψ_ε⟩ =iω_i⟨ψ_ε|∂/∂ϕ_i(log U(T)|ψ_ε) ⟩- iω_i⟨ψ_ε|log U(T) ∂|ψ_ε⟩/∂ϕ_i =Tω_i ∂ε/∂ϕ_i where T is the period of the combined drive. The average power exchanged by the drives is therefore E̅_i=ω_i ∂ε/∂ϕ_i . § RANDOM FLOQUET ENSEMBLES §.§ Gaussian Ensemble With these preliminary remarks, we now begin our analysis by considering the model of Eq. (<ref>) with [ H_0+H_1 = F_0+F_1 cos(ω_1 t +ϕ_1); H_2 =F_2 cos(ω_2 t +ϕ_2),; F_i = h_0^i1 + h_x^i σ_x + h_y^i σ_y+h_z^i σ_z. ] Here σ_i are the Pauli matrices. All h_j^i are chosen from a Gaussian distribution P(h_i^j) = e^-(h_i^j -c)^2/2 σ^2. We use ω_1/ω_2 =2/3, ω_2=1 and ϕ_2 =0=ϕ_1 throughout, and sample over N=12000 realizations of H(t). In Fig. 
<ref> we depict the energy transfer between the two drives E_i(t) for four distinct Floquet systems chosen from a Gaussian distribution with c=1 and σ=1, but with approximately the same amplitude F̅=( F_0+F_1+F_2) /3≈ 0.7. The resulting energy pumping efficiency varies substantially between the different realizations and could exceed the quantized value E_Q, indicating that the various Floquet states exhibit distinct dynamical behavior. In Fig. <ref> we summarize the statistical properties of the energy pumping efficiency distribution P(E̅). For any value of σ and c, P(E̅) is well approximated by a Lorentzian curve P(E̅) ∝ [1+E̅^2/γ^2]^-1. In Fig. <ref>-a) we depict P(E̅) for c=1 and σ=1 described by γ=0.024. Since we are interested in the nonadiabatic limit c ≪ω_i, we study the energy pumping efficiency for c=0, with the relevant scale now given by σ. The scale parameter γ plotted in Fig. <ref>-b) grows as γ∝σ^4 for σ≪ 1 and decreases for σ≳ 0.5. The normalized distribution P(E̅/F_i) is described by γ_n with a similar behavior. For a given σ=1, γ is an increasing function of c and γ_n a weakly-dependent [see Fig. <ref>-c)], indicating that the Hamiltonian strength dominates the pumping strength at the nonadiabatic regime. The considered model belongs in the trivial dynamical class C=0, where C is the Chern number associated with the instantaneous ground state band. In the Appendix <ref> we discuss the geometric aspects of the energy pumping effect encoded in the Berry curvature of the quasienergy state for various Hamiltonian realizations, and provide analytical expressions for C. As the universality of transport properties is related to the level statistics and spectral correlations <cit.>, it is natural to study the level statistics of the Floquet operator U(T). We focus on the nearest neighbor spacing distribution P(ε_s) between two adjacent ordered levels, ε_s = (ε_2-ε_1)/⟨ε_s ⟩. Level statistics have been used in random Floquet systems to understand the statistics of Floquet operators <cit.>, Floquet thermalization <cit.>, and disorder in driven topological phases <cit.>. In Fig. <ref>-(d) we depict P(ε_s) for σ=1 and c=1, well approximated by P(ε_s)∝ε_s e^-b ε_s^2, with b=0.12 and a linear level repulsion at small spacings. The form of P(ε_s) resembles the spacing distribution of a Gaussian orthogonal ensemble with b=π/4 <cit.>, and indicates that the considered ensemble constitutes a generic Hamiltonian family. The Floquet operator can be in a different random matrix class to the instantaneous Hamiltonian. Finally, no linear relationship can be established between E̅ and both the instantaneous and time-averaged Hamiltonian norm H̅_0= H(t=0)/c_0 and H̅=F_0/c_0, with c_0=3 1+∑_i σ_i a normalization constant. The two datasets are characterized by an almost vanishing correlation coefficient r=0.05 (see Appendix <ref> Fig. <ref> for the dependence of E̅ on either H̅_0 and H̅ and Eq. (<ref>) for the definition of r). §.§ Spherical Ensemble We now consider the model of Eq. (<ref>) with [ H_0+H_1(t)= F_0+(𝐆_1·σ) cos(ω_1 t +ϕ_1),; H_2(t)=( 𝐆_2·σ) cos(ω_2 t +ϕ_2),; F_0=𝐆_3 ·σ,; 𝐆_i = ρ( √(1-χ_i^2)cosθ_i ,√(1-χ_i^2)sinθ_i, χ_i ), ] where χ_i is chosen from a uniform distribution on the interval [-1,1] and θ_i on the interval [0,2π], providing Haar-measure sampling. Configurations obtained by an SU(2) transformation of vectors 𝐆_i, result in the same energy pumping efficiency. The resulting behavior is summarized in Fig. 
<ref> for different values of the distribution norm ρ, ω_1/ω_2=2/3, ω_2=1 and ϕ_1=0=ϕ_2. Quite surprisingly, there is a critical value ρ_c ≈ 1.5, above which P(E̅) changes from a symmetric triangular distribution with support at |E̅|< ξ: P(E̅)≈1/ξ^2(E̅-ξ) [ρ<1.5] to a distribution approximated by a Laplacian: P(E̅) ≈1/β e^-|E̅|/β [ρ>1.5]. . This behavior cannot be associated with an instantaneous Hamiltonian symmetry breaking. For ρ≪ 1 we find the scaling σ_0 ∝ρ^4, with σ_0 the standard deviation of the triangular distribution. Nearest neighbor spacing distribution P(ε_s) is approximated by a curve of the form P(ε_s)∝ε_s e^-b ε_s^2 with b=0.16 and a linear level repulsion at small spacings (see Fig. <ref> of Appendix <ref>). For all model realizations, it holds that C=0. §.§ Complex Gaussian Ensemble To complete the description, we now turn to generalized two-frequency models with complex parameters of the form [ H(t) = F_0 + F_1 e^iω_1 t + F_2 e^i ω_2 t +,; F_i = 1/2√(2)[ [ f^r_1+i f_1^i f^r_2+i f_2^i; f^r_3+i f_3^i f^r_4+i f_4^i ]]. ] H(t) supports various topological realizations, including the temporal BHZ model as a possible outcome. Parameters are chosen from P(f_j^r,i) = e^-(f_j^r,i-c)^2/2 σ^2. We examine the results of Fig. <ref>, where we plot P(E̅) for σ=1 and N=10^4 realizations, with ω_1/ω_2=2/3 as well, and ω_2=1. Once more, a transition is observed. Below =2 we observe a Laplacian distribution: P(E̅)≈1/βe^-E̅/β [<2 ] [see Fig.<ref>-a) for c=1], while above, we observe a Gaussian: P(E̅)∝ e^-E̅^2/2σ_0^2 [>2] [see Fig.<ref>-b) for c=3]. For a given σ=1, the standard deviation σ_0 of P(E̅) is an increasing function of c, while the normalized distribution P(E̅/F_i) is described by a standard deviation σ_n with a weak dependence on c, Fig.<ref>-c). For c=0 and σ≪ 1, both σ_0 and σ_n scale as σ_0 ∝σ^4, illustrated in Fig.<ref>-d). Finally, the nearest neighbor spacing distribution P(ε_s) shown in Fig. <ref>-e) exhibits quadratic level repulsion at small spacings, P(ε_s)∝ε_s^2 e^-b ε_s^2, and resembles the spacing distribution of a Gaussian unitary ensemble, for which it holds b=4/π <cit.>. Here we find b=0.165 and note that the transition at = 2 is not evident in P(ε_s). A particular realization of the model (<ref>) is the random temporal BHZ model H(t) =η_0 σ_z +η_1^x sin(ω_1 t +ϕ_1)σ_x - η_1^z cos(ω_1 t +ϕ_1) σ_z +η_2^x sin(ω_2 t +ϕ_2)σ_y - η_2^z cos(ω_2 t +ϕ_2) σ_z . Depending on the parameters, the considered model could belong in a topological dynamical class with | C| =1. For η^j_i=η, η_0= η m and | m | <2 (topological class), it is well established that the energy transfer is quantized in the near-adiabatic limit η≫ω_i <cit.>. Away from this limit, strong fluctuations are induced by the nonadiabatic driving conditions, for both rationally and irrationally-related frequencies, while the former exhibit more efficient pumping that exceeds the quantized rate (see Fig. <ref> of Appendix <ref>). In this regime, the topological properties become less important and the temporal BHZ model is one realization of an ensemble of many that belong to the same symmetry class, making the statistical description of the pumping effect necessary. Only in the strong-drive limit does the physics related to the topological class becomes dominant and quantized energy transfer is restored. § PARAMETER IMPORTANCE A Random-Floquet Hamiltonian approach can be utilized to investigate frequency conversion processes in a relatively large parameter space. 
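To make the ensembles considered above concrete, the following minimal sketch (assuming Python/NumPy; the normalization of the complex blocks and the implicit Hermitian-conjugate drive term are our reading of the model definitions, and all numerical values are illustrative) draws single realizations of the Gaussian, spherical, and complex Gaussian terms that enter H(t):

import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def gaussian_term(c=1.0, sigma=1.0):
    # F = h0*1 + hx*sx + hy*sy + hz*sz, each h drawn from N(c, sigma^2)
    h = rng.normal(c, sigma, size=4)
    return h[0] * s0 + h[1] * sx + h[2] * sy + h[3] * sz

def spherical_term(rho=1.5):
    # G.sigma with G uniform (Haar) on a sphere of radius rho:
    # chi uniform in [-1, 1], theta uniform in [0, 2*pi)
    chi = rng.uniform(-1.0, 1.0)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - chi**2)
    return rho * (s * np.cos(theta) * sx + s * np.sin(theta) * sy + chi * sz)

def complex_gaussian_term(c=1.0, sigma=1.0):
    # 2x2 block F_i with independent real and imaginary Gaussian entries;
    # the drive enters as F_i e^{i w_i t} (plus, we assume, its Hermitian conjugate)
    f = rng.normal(c, sigma, size=(4, 2))
    return ((f[:, 0] + 1j * f[:, 1]) / (2.0 * np.sqrt(2.0))).reshape(2, 2)

# one realization per ensemble, e.g. to be inserted into H(t) and the propagator above
F_gauss, F_sphere, F_complex = gaussian_term(), spherical_term(), complex_gaussian_term()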
Yet there are several questions that are difficult to settle, including the importance of Hamiltonian parameters in producing a conversion process with high efficiency. To this end, we propose a feature-extraction classification algorithm applied to the random temporal BHZ model of Eq. (<ref>), in order to recognize the relevance of topology in nonadiabatic pumps. Machine-learning approaches have been successfully applied in diverse fields including the identification of quantum phases <cit.>, ab initio solution of many-electron systems <cit.>, estimation of magnetic Hamiltonian parameters <cit.>, and others. We generate a dataset of N=10^5 elements for binary classification with eight uncorrelated features (η_i^j, ϕ_i, and C), depicted in Fig. <ref>. Input data with an efficiency below the decision boundary E̅_Q ≤E̅_d correspond to the class low, while the rest are classified as high, with E̅_Q=E̅/E_Q being the normalized efficiency. Parameters η_i^j are sampled from a Gaussian distribution with c=1 and σ=1, and phases ϕ_i from a uniform distribution in the interval [0,2π]. The statistical properties of the dataset, together with details on the classification model, are given in Appendix <ref>. Choosing E̅_d=0.5, 81% of all data are classified as low, while 60% are characterized by a vanishing Chern number C=0. Interestingly, 34% of data with low efficiency belong to the C=1 class, while this fraction rises to 66% for the high class, indicating that topological models have a higher representation in the high class. As features and targets interact nonlinearly, we employ a gradient-boosted trees model classifier, which exhibits the best performance in terms of commonly used metrics such as recall, precision, and accuracy <cit.>. A key step towards Hamiltonian engineering targeting high conversion efficiency is understanding the influence of individual features. The relative feature importance I reflects how often a feature is used in the split points of a decision tree. We identify I(η_0) =24.1%, I(η^z_2) =15.9%, I(η^z_1) =13.6%, I(η^x_1) ≈ 12.6% ≈ I(η^x_2), I(ϕ_i)=10.6%, and I(C) =0.01%. Our findings suggest that the uniform z component of the magnetic field is most valuable in achieving high conversion efficiency in nontopological pumps, while an instantaneous topological model is almost irrelevant. The relative feature importance remains unchanged for different decision boundaries E̅_d ∈ [0.4,0.6] and under the inclusion of a multi-class approach (e.g., splitting the data into low, intermediate, and high efficiency). § ANALYTICAL BOUNDS ON THE PUMPING OF ARBITRARY STATES Before closing, we consider the energy pumping in a doubly driven system which is not initialized in a Floquet eigenstate. Non-adiabatic aspects of the energy pumping make the pumped power in an arbitrary superposition of Floquet states differ from the simple weighted sum of the pumping rate in each of the eigenstates. To study this, let us look at the work operator Ŵ_k(T)=iω_k ∂/∂ϕ_k log U(T)=ω_k T ∂/∂ϕ_k (∑_i ε_i |i⟩⟨ i|) . A straightforward manipulation leads to the following expression Ŵ_k(T)=ω_k T(∑_i ∂ε_i/∂ϕ_k |i⟩⟨ i| +∑_i≠ j(ε_i-ε_j) ⟨ i|∂_ϕ_k j⟩ |j⟩⟨ i|), with |∂_ϕ j⟩=∂/∂ϕ |j⟩, and ε_i the Floquet quasi-energies. We can separate the work operator into two cases: 2-level systems and larger systems. For 2-level systems, we further assume that the Floquet quasienergies appear in same-magnitude pairs, ε_1=-ε_2.
By squaring the operator W_k(T) we obtain: W^2=1/2 Tr W^2=(ω_k T)^2( (∂ε_1/∂ϕ_k)^2+4ε_1^2 | ⟨ 1|∂_ϕ_k 2⟩|^2 ). On the other hand, for multilevel systems, we obtain a lower bound on the energy pumped: W^2 ≥ 1/N Tr W^2 = 1/N (ω_k T)^2( ∑_i=1^N(∂ε_i/∂ϕ_k)^2 + ∑_i,j=1^N(ε_i-ε_j)^2 | ⟨ i|∂_ϕ_k j⟩|^2 ). Interestingly, the maximum work transferred in a multidrive system depends on the quantum metric based on changes in the relative phase of the drive. We intend to investigate the maximum work done in future work. § DISCUSSION To conclude, we investigated the Floquet statistics for an ensemble of doubly-driven random Hamiltonians in a large parameter space, with an emphasis on the distribution of the energy pumping efficiency, by leveraging ideas from RMT. For nonadiabatic pumps, it holds that P(E̅) = f(E̅/β) with β∝σ^4 and σ a scale parameter specific to the particular Hamiltonian ensemble. Our main finding is that in the spherical ensemble, Sec. <ref>, as well as in the complex Gaussian ensemble, Sec. <ref>, there is a transition in the type of distribution that describes the pumping rate. This transition cannot be associated with a symmetry breaking of the instantaneous Hamiltonian. It occurs when the drive angular frequency and amplitude are of about the same order. While this transition is not associated with the Chern number of an underlying 2d band structure, it is likely driven by Berry curvature effects which become significant in the same range. The scaling of energy pumping at small amplitudes requires some additional consideration before closing. In particular, we may ask whether any of our results are captured by a Magnus expansion in the regime of low-normed Hamiltonians (η≪ 1) <cit.>. We find that such an expansion is not sufficient to capture our numerical results. First, we calculate the low-driving scaling of the quantity E̅/E_Q∝η^α for specific model realizations, presented in detail in Appendix <ref>. It appears that for either a generic or a topological nonadiabatic pump, α does not exhibit universal properties. Also, as we show in Appendix <ref>, the leading powers for the cross-drive work obtained by a fourth-order Magnus expansion for a generic model are E̅∼η_0η_D^4, with η_0 the amplitude of the constant term in the Hamiltonian, and η_D the scale of the two drives. It thus becomes apparent that the considered regime of intermediate driving amplitudes, for which the pumping efficiency becomes significant, 0 ≪η < ω_i, lies outside the applicability of the perturbative approach. Our results can be directly applied in frequency conversion platforms based on single-qubit quantum devices with a deep level of control, making the implementation of various classes of quantum Hamiltonians possible <cit.>. The considered models offer a simple interpretation of energy and particle pumping of analogous protocols, including Thouless charge pumping in photonic <cit.>, ultracold fermion <cit.>, and single-spin <cit.> systems, quantum pumps in quantum dots <cit.>, and photon pumping in cavities <cit.>. Energy pumping between different drives of the same system is indeed a generic feature of multi-driven Floquet systems. Nonetheless, little is known about the distribution of such pumping. Numerically, we managed to characterize a swath of models. We expect, however, that this question could motivate a Floquet Random-Matrix type theory aimed at energy transfer and similar dynamical properties unique to driven systems.
Future interesting directions for multi-parameter Floquet quantum Hamiltonians include the statistics of the quantum geometric tensor <cit.>, or more application-oriented approaches, using deep reinforcement learning algorithms aiming at optimizing frequency conversion processes <cit.>. § ACKNOWLEDGMENTS We thank Anushya Chandran and Michael Kolodrubetz for useful discussions. C.P. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 839004. We are also grateful to the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award de-sc0019166. GR is also grateful to the NSF DMR grant number 1839271, as well as ARO MURI grant FA9550-22-1-0339 supported GR’s time commitment to the project in equal shares. NSF provided partial support to C.P. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. G.R. is also grateful for support from the Simons Foundation and the Packard Foundation. § QUANTIZED FREQUENCY CONVERSION Our analysis begins by introducing the temporal analog of the chiral Bernevig-Hughes-Zhang (BHZ) model<cit.> H(t)=η m σ_z + H_1(ω_1 t +ϕ_1)+H_2(ω_2 t +ϕ_2) , with H_1(t)=η [sin(ω_1 t +ϕ_1) σ_x -cos(ω_1 t +ϕ_1) σ_z], and H_2(t)=η [sin(ω_2 t +ϕ_2) σ_y - cos(ω_2 t +ϕ_2)σ_z] the Hamiltonian of the two drives. Here the gap parameter m controls the topological | m |<2 and non-topological | m |>2 regime of the model. The Hall response translates to a quantized pumping of energy between the drives as, E_Q= ω_1 ω_2 C/2π , with C the Chern number of the band. Together with the adiabatic requirement η≫ω_i, a necessary condition is that ω_1 and ω_2 are rationally independent, ω_1/ω_2 ≡γ with γ∉ℚ, and the model is in its topological regime | m | <2<cit.>. In this respect, energy quantization emerges once the dynamics of the system effectively samples the whole Floquet zone and C takes an integer value. The energy pumping effect for a rational frequency ratio, ω_1/ω_2 ≡ q/p for p,q ∈ℤ, could exceed the quantized value E_Q in the entire topological region and can even be extended in the trivial regime <cit.>. In this case, only part of the Berry phase is sampled along a particular periodic path through the Floquet zone, which in turn depends on the choice of the offset phases ϕ_i. When the two frequencies are incommensurate, the energy transfer is maximized when the system is initialized in an eigenstate of ℋ_0(t=0), while in the opposite case of commensurate frequencies is preferable to consider a Floquet eigenstate initialization, |ψ_0 ⟩ =|ψ_F ⟩ with U_0(T)|ψ_F ⟩ =ε_F |ψ_F ⟩. Here U_0(T) is the Floquet single-period evolution operator <cit.>. In Fig. <ref> we depict the energy transfer in the adiabatic strong-drive regime η =10, gap parameter in the topological regime m=1, ω_1=1, and Δϕ = ϕ_1-ϕ_2=0 for both incommensurate ω_2/ω_1= (√(5)-1)/2 (upper panel) and commensurate ω_2/ω_1=2/3 (lower panel) frequencies. As expected, in both cases the energy pumping rate is E̅_i = E_Q. The nonadiabatic regime η≪ω_i, in which any bulk gap in the initial Hamiltonian is not maintained under time evolution, is not accessible analytically. We resort to a numerical calculation of E_i(t) of Eq. <ref> and subsequent estimation of the normalized conversion efficiency E̅_Q=E̅/E_Q, summarized in Fig. <ref>. We use ω_2/ω_1=2/3 (red line) or ω_2/ω_1=( √(5)-1)/2 (blue line) and gap parameter m=1.2. 
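A minimal sketch of this numerical calculation of E_i(t) (assuming Python/NumPy; the time step, total evolution time, and the simple midpoint propagator are illustrative choices, with the initial state taken as the instantaneous ground state of H(0) as prescribed above for incommensurate drives) reads:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bhz_terms(t, eta, m, w1, w2, p1=0.0, p2=0.0):
    # instantaneous temporal-BHZ Hamiltonian and the time derivatives of the two drive terms
    H1 = eta * (np.sin(w1 * t + p1) * sx - np.cos(w1 * t + p1) * sz)
    H2 = eta * (np.sin(w2 * t + p2) * sy - np.cos(w2 * t + p2) * sz)
    dH1 = eta * w1 * (np.cos(w1 * t + p1) * sx + np.sin(w1 * t + p1) * sz)
    dH2 = eta * w2 * (np.cos(w2 * t + p2) * sy + np.sin(w2 * t + p2) * sz)
    return eta * m * sz + H1 + H2, dH1, dH2

def pumped_energies(eta=10.0, m=1.0, w1=1.0, w2=(np.sqrt(5.0) - 1.0) / 2.0,
                    t_max=200.0, dt=1e-3):
    # accumulate E_i(t) = int_0^t <psi| dH_i/dt' |psi> dt' while propagating |psi(t)>
    H0, _, _ = bhz_terms(0.0, eta, m, w1, w2)
    _, v = np.linalg.eigh(H0)
    psi = v[:, 0]                      # instantaneous ground state of H(0)
    E1 = E2 = 0.0
    for k in range(int(t_max / dt)):
        t = k * dt
        _, dH1, dH2 = bhz_terms(t, eta, m, w1, w2)
        E1 += dt * np.real(np.vdot(psi, dH1 @ psi))
        E2 += dt * np.real(np.vdot(psi, dH2 @ psi))
        H, _, _ = bhz_terms(t + 0.5 * dt, eta, m, w1, w2)   # midpoint propagator
        w, u = np.linalg.eigh(H)
        psi = u @ (np.exp(-1j * w * dt) * (u.conj().T @ psi))
    return E1, E2

t_max = 200.0
E1, E2 = pumped_energies(t_max=t_max)
E_Q = 1.0 * (np.sqrt(5.0) - 1.0) / 2.0 / (2.0 * np.pi)      # quantized rate w1*w2/(2*pi)
# the two rates should be nearly equal and opposite, with magnitude close to E_Q deep in the adiabatic limit
print(E1 / t_max / E_Q, E2 / t_max / E_Q)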
Strong fluctuations are induced by the nonadiabatic driving conditions, for both rationally and irrationally-related frequencies, while the former exhibit more efficient pumping that exceeds the quantized rate. In this regime, the topological properties become less important and the temporal BHZ model is one realization of an ensemble of many that belong to the same symmetry class, making the statistical description of the pumping effect necessary. Only in the strong-drive limit the physics related to the topological class becomes dominant and quantized energy transfer is restored. §.§ Low-Driving Expansion Here we demonstrate numerically that the low-driving scaling of the quantity E̅/E_Q is model dependent and does not exhibit universal properties. We consider both the temporal BHZ model Eq. (<ref>) as well as a generic Hamiltonian of the form H(t)/η= F_0 + F_1 cos(ω_1 t +ϕ_1) + F_2 cos(ω_2 t +ϕ_2) , where F_i = h_0^i1 + h_x^i σ_x + h_y^i σ_y+h_z^i σ_z. For all considered models we use ω_1/ω_2 = 2/3. We choose two different realizations of (<ref>) for which it holds E̅/E_Q > 1 at η =1, with E_Q= ω_1 ω_2/2π, and two realizations of the temporal BHZ model; m=1.2, Δϕ=0.4 and m=1, Δϕ=2.4. The overall picture suggested by Fig. <ref> is that for low-driving η≪ω_i, the scaling of the quantity E̅/E_Q ∝η ^α for either a generic or a topological nonadiabatic pump does not exhibit universal properties. A cursory further investigation into the scaling properties of the energy pumping of examples of model 10 reveals why there is no universal scaling. When the scaling of E̅ is explored with respect to the magnitude of F_0 and separately with respect to the magnitude of F_1 or F_2, it becomes clear that the scaling observed in Fig. <ref> reflects the highly nonlinear nature of frequency pumping. Using the very same representative realizations from Fig. <ref>, we find that a 4th-order Magnus expansion for the Floquet Hamiltonian yields an even higher scaling power. Within the manifold where ||F_0||=||F_1||=||F_2||, and H=η_0 F_1 +η_D [F_1 cos (ω_1 t)+F_2 cos(ω_2 t)],we find: E̅∼η_0 η_D^4 . In the limit of both η_0≪ 1 and η_D≪ 1. § FEATURE EXTRACTION CLASSIFICATION ALGORITHM In this Appendix, we present details of the classification algorithm employed to derive the importance of individual parameters in resulting in highly efficient energy pumping. We consider the following Random-BHZ Hamiltonian H(t) =η_0 σ_z +η_1^x sin(ω_1 t +ϕ_1)σ_x - η_1^z cos(ω_1 t +ϕ_1) σ_z +η_2^x sin(ω_2 t +ϕ_2)σ_y - η_2^z cos(ω_2 t +ϕ_2) σ_z , and calculate the energy conversion efficiency E̅ for commensurate frequencies ω_2/ω_1=3/2 and Floquet initialization. We generate a dataset of N = 10^5 elements, with parameters η_i^j sampled from a Gaussian distribution P(η_i^j) =e^-(η_i^j -c)^2/2 σ^2, with c=1 and σ=1, while phases ϕ_i are chosen from a uniform distribution in the interval [0,2π]. 60% of the generated data belong to the C=0 topological class. The energy pumping efficiency distribution decays exponentially P(|E̅|)∝ e^-|E̅|/β, with β=0.26 [see Fig. <ref>]. Maximum efficiency in the ensemble is found at E̅_=3.78, and the mean value at E̅_=0.275. We note that the linear correlation between efficiency E̅ and Hamiltonian parameters is weak, a result established by calculating the Pearson's correlation coefficient between two datasets x and y of length N, r_xy=1/σ_x σ_y∑_i=1^N(x_i-x̅)(y_i-y̅) , where x̅=∑_i=1^N x_i/N the sample mean and σ^2_x=∑_i=1^N(x_i-x̅)^2 the standard deviation. Fig. 
<ref> presents r between pumping efficiency E̅, initial time Hamiltonian norm H̅_0, driving amplitudes η_i^j, phases ϕ_i, and Chern number C. E̅ is only weakly associated with C and η_i^z, with r=0.3 and r=0.2 respectively, while r<0.05 for the remaining parameters. The main issue addressed here is to identify which of the Hamiltonian parameters are important in resulting in high-frequency conversion efficiency, a question treated as a classification problem. We proceed with constructing the model using eight uncorrelated features (amplitudes η_i^j, phases ϕ_i and Chern number C). Input data with an efficiency below the decision boundary E̅≤E̅_d correspond to class low, while the rest are classified as high. We introduce the normalized pumping efficiency E̅_Q=E̅/E_Q and use the quantized energy transfer E_Q as a reference. This binary classification is visually explained in Fig. <ref> with a decision boundary at E̅_d=0.5. 81% of all data belong to the low class (imbalanced data), and 40% of all data are characterized by a finite topological charge C=1. In the low class, 34% have C=1, while in the high class it rises to 66%, indicating that topological models have a higher representation in the high efficiency class. Mean value of initial time Hamiltonian norm is H̅_0^=1.25 (1.33) for low (high) class. We employ four machine learning classifiers, namely Random Forest, Logistic Regression, Support Vector Machines, and Gradient Boosted Trees, and assess their performance based on commonly used metrics such as P precision, R recall, and A accuracy <cit.>. Among them, the Gradient Boosted Trees model classifier can model non-linear interactions between the features and the target and has the highest performance with A=0.96, P=0.91 and R=0.88. The model is trained on 80% of all data and the rest are used as a test set for validation. Class imbalance is treated by adjusting the weight w assigned to each class to w=1 for the majority class (low) and w=3 for the minority class (high). The gradient-boosted trees model is a machine learning method that makes predictions by combining a sequence of weak decision tree classifiers based on a gradient-boosting predictive performance <cit.>. Once we construct our model, we extract the relative feature importance I, which reflects how often a feature is used in the split points of a decision tree, averaged over the tree ensemble. We find I(η_0) =24.1%, I(η^z_2) =15.9%, I(η^z_1) =13.6%, I(η^x_1) ≈ 12.6%≈I(η^x_1), I(ϕ_i)=10.6%, and I(C) =0.01%. Our findings suggest that the uniform z component of the magnetic field is most valuable in achieving high conversion efficiency and in this limit, the physics related to the topological pumping is less important. To complete the description we must also examine the effect of the choices made while constructing the binary classification problem. Since E̅ is a continuous variable, we are led to consider difference decision boundaries E̅_d ∈ [0.4,0.6], and also employ a multi-class approach where data are divided into three classes (low, intermediate, and high), visually explained in Fig. <ref>. In all cases, we arrive at models with similar performances and the same relative feature importance. § BERRY CURVATURE In this section, we explore geometric aspects of the energy pumping encoded in the Berry curvature of the quasienergy state for various Hamiltonian ensembles. 
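Before turning to the curvature itself, the gradient-boosted classification and feature-importance extraction described in the preceding appendix can be sketched as follows (assuming scikit-learn; the hyperparameters and the use of per-sample weights to emulate the class weights w=1 and w=3 are illustrative choices rather than the exact settings behind the quoted numbers):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

FEATURES = ["eta0", "eta1x", "eta1z", "eta2x", "eta2z", "phi1", "phi2", "C"]

def train_efficiency_classifier(X, y, high_weight=3.0):
    # X: (N, 8) array of Hamiltonian parameters and Chern number, one row per realization;
    # y: 1 if the normalized efficiency exceeds the decision boundary, 0 otherwise
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = GradientBoostingClassifier(n_estimators=300, max_depth=3, random_state=0)
    sample_w = np.where(y_tr == 1, high_weight, 1.0)     # minority ("high") class up-weighted
    clf.fit(X_tr, y_tr, sample_weight=sample_w)
    print("test accuracy:", clf.score(X_te, y_te))
    for name, imp in sorted(zip(FEATURES, clf.feature_importances_), key=lambda p: -p[1]):
        print(f"{name}: {imp:.3f}")                      # relative feature importance
    return clf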
The Berry curvature is defined as follows, B(θ_1, θ_2) = ⟨_θ_2Ψ(θ_1,θ_2) |_θ_1Ψ(θ_1,θ_2) ⟩ -⟨_θ_1Ψ(θ_1,θ_2) |_θ_2Ψ(θ_1,θ_2) ⟩ , where θ_i = ω_i t +ϕ_i is the phase angle of drive i and the quasienergy state Ψ(θ_1,θ_2) is an eigenstate of Hamiltonian H(θ_1,θ_2) given in Eq. (<ref>), and use ω_1/ω_2 = 2/3. In Figs. <ref>–<ref> we present B(θ_1, θ_2) plotted over the Floquet zone θ_i ∈ [0,2π) with Hamiltonians parameters chosen from i) a Gaussian ensemble with c=1 and σ=1 (see Fig.<ref>), ii) a spherical ensemble with ρ=1.5 (see Fig.<ref>) and for iii) the random temporal BHZ model of Eq. (<ref>) with parameters chosen from a Gaussian with c=1 and σ=1 (see Fig.<ref>). The first two models correspond to a linear polarization between the two drives with a vanishing Chern number given by C=1/(2 π) ∫_ dθ^2 B(θ_1,θ_2). For the random temporal BHZ model, the Chern number can take nonvanishing integer values (| C | =1) depending on the Hamiltonian parameters. We note that for commensurate frequencies the system does not sample over the whole Floquet zone, but rather explores a closed periodic path Γ depicted by dashed lines in Figs. <ref>–<ref> <cit.>. Within the adiabatic picture, one expects the pumping effect E̅ to be roughly the integral of the Berry curvature along the path Γ, E̅∝ C_Γω_1 ω_2, with C_Γ=1/(2 π) ∫_Γ dθ^2 B(θ_1,θ_2). From the results presented in Figs. <ref>–<ref> and explicit calculation of C_Γ we conclude that this naive approximation breaks down in the considered nonadiabatic regime and that further effects beyond the local geometrical characteristics of quasienergy states should be taken into account.
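The Floquet-zone integral of B(θ_1, θ_2) can also be evaluated with a gauge-invariant plaquette discretization; the sketch below (assuming Python/NumPy and using the temporal BHZ Hamiltonian as an example; the grid size is an illustrative choice and the lattice construction is one possible discretization rather than the method used to produce the figures) returns the Chern number of the lower band:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bhz(t1, t2, eta=1.0, m=1.0):
    # instantaneous temporal-BHZ Hamiltonian as a function of the two drive phases
    return eta * (m * sz + np.sin(t1) * sx - np.cos(t1) * sz
                  + np.sin(t2) * sy - np.cos(t2) * sz)

def chern_number(h, n=60):
    # lower-band Chern number over the Floquet zone from gauge-invariant plaquette products
    thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, t1 in enumerate(thetas):
        for j, t2 in enumerate(thetas):
            _, vecs = np.linalg.eigh(h(t1, t2))
            u[i, j] = vecs[:, 0]                          # lower eigenstate at each grid point
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    total = 0.0
    for i in range(n):
        for j in range(n):
            u1, u2 = u[i, j], u[(i + 1) % n, j]
            u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            plaquette = link(u1, u2) * link(u2, u3) * link(u3, u4) * link(u4, u1)
            total += np.angle(plaquette)                  # Berry curvature integrated over the plaquette
    return total / (2.0 * np.pi)

# magnitude expected to be 1 for |m| < 2 and 0 for |m| > 2 (sign set by the plaquette orientation)
print(chern_number(h_bhz))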
http://arxiv.org/abs/2307.01396v1
20230703231836
Precheck Sequence Based False Base Station Detection During Handover: A Physical Layer Based Security Scheme
[ "Xiangyu Li", "Kaiwen Zheng", "Sidong Guo", "Xiaoli Ma" ]
eess.SP
[ "eess.SP", "cs.NI" ]
Precheck Sequence Based False Base Station Detection During Handover: A Physical Layer Based Security Scheme Xiangyu Li12, Kaiwen Zheng1, Sidong Guo1, Xiaoli Ma1 1School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, USA 2Georgia Tech Shenzhen Institute, Tianjin University, Shenzhen, China e-mail: {xli985, kzheng71, sguo93, xiaoli}@gatech.edu ============================================================================================================================================================================================================================================================================================================ False Base Station (FBS) attack has been a severe security problem for the cellular network since 2G era. During handover, the user equipment (UE) periodically receives state information from surrounding base stations (BSs) and uploads it to the source BS. The source BS compares the uploaded signal power and shifts UE to another BS that can provide the strongest signal. An FBS can transmit signal with the proper power and attract UE to connect to it. In this paper, based on the 3GPP standard, a Precheck Sequence-based Detection (PSD) Scheme is proposed to secure the transition of legal base station (LBS) for UE. This scheme first analyzes the structure of received signals in blocks and symbols. Several additional symbols are added to the current signal sequence for verification. By designing a long table of symbol sequence, every UE which needs handover will be allocated a specific sequence from this table. The simulation results show that the performance of this PSD Scheme is better than that of any existing ones, even when a specific transmit power is designed for FBS. False base station, sequence verification, handover scheme, single-carrier transmission, physical layer security § INTRODUCTION Globally, the fifth Generation Mobile Communication System (5G) is providing greener networks <cit.> with increasingly high quality of service (QoS), mainly in terms of higher throughput, spectral efficiency and energy efficiency, and lower complexity in signal transmitting and processing <cit.>. However, due to backward compatibility, 5G still inherits many mechanisms from previous generations and this is where some security problems may arise. One of the most important mechanisms is the reselection of cell <cit.> or base station (BS) for user equipment (UE). The basic principle lies in the nature of UE connection - to find better QoS from surrounding environment. This mechanism is pervasive for devices around and can be of vital significance for the duration of connectivity. However, the weaknesses in this mechanism also create opportunities for potential attacks - the false base station (FBS) attack. FBS, also referred to as pseudo base station (PBS) or malicious base station (MBS), is an illegal BS which aims at attacking surrounding or targeted devices passively or actively over radio access networks (RANs). It has the ability to utilize the potential weaknesses in the network structure to force UE's connection with itself instead of the legal base station (LBS). Besides, it is difficult to predict when, where and how the threats from FBS will appear. § BACKGROUND §.§ Handover Mechanism Handover is a process in communications where a transition is made to shift the connection from the current cell to another cell without ending session. 
In order to secure this process, the 3rd Generation Partnership Project (3GPP) formulates a series of data, e.g. Measurement Report (MR), to help determine whether a transition should be executed. Three stages, Handover Preparation, Handover Execution and Handover Completion in Figure <ref> summarize how UE is shifted from source BS to target BS according to <cit.>. The 3GPP handover standard also includes Mobility Management Entity (MME) and Serving Gateway (SG) in the complete Technical Specification. As the main procedures of FBS attack occur before MME and SG get involved, we only focus on the interactions among UE, source BS and target BS. Unfortunately, the source of system information is never authenticated even in nowadays 5G networks, which is a weakness in the handover mechanism. The backward compatibility has not been properly dealt with yet, FBS may intercept the system information of a nearby LBS and replace it in a similar or higher signal power, so that one or more UEs will be illegally connected. In order to avoid such FBS attacks brought by authentication failure in network or link layer, extra verification schemes in physical layer need to be introduced for more accurate FBS detection. There are many types of attacks that FBS can initiate, including Impersonation, Intercept and Eavesdropping <cit.> etc. While all devices in the network, e.g. BS, Relay BS, or even the core network, are exposed to such attacks, the majority victims are mobile devices. FBS has the ability to force a UE's connection from an LBS to itself. Basically, this is realized by filching the broadcast messages of a selected LBS and increasing the transmit power to make UEs choose FBS. §.§ Related Work Some cryptographic detection schemes are introduced in 3GPP 5G Specification <cit.> to secure handover. However, these cryptography-based models may either increase the complexity or fail to defend against the updated attacks in real-time and multi-cast cases. The previous detection schemes fall into three categories <cit.> as follow: (1) UE-based detection scheme. Many previous detection schemes were in this category since they relied on the data at UE side. In <cit.>, a pragmatic radio frequency (RF) fingerprinting-based FBS detection approach was investigated and improvements were made in <cit.> to enable carrier frequency offset (CFO)-based schemes in time-critical scenarios. However, these schemes could be time-consuming and computationally-inefficient. (2) Crowd-Sourced scheme. With data collected from distributed UEs, the processing of data is done by the source BS or other data centers. A network of stationary measurement units and an application for mobile phones were studied in <cit.> to detect IMSI catcher. What in common was that both implementations required scanned data feedback to a central processing unit. A machine learning-based IMSI catcher detection system based on publicly available data was presented in <cit.>, which combined three detectors – Off-line-learning detector, Anomaly detector and Ensemble detector. (3) Network-based scheme. Information from both UEs and BSs is processed in the core network or cloud servers instead of the local BS. A cloud-server-based detection method was provided in <cit.>, which required BSs and UEs to transmit information to the cloud server. This uploading of information from both UEs and BSs increased uplink load, especially when there were no FBS attacks. Only a few researches focused on the security of handover process where FBS attacks may occur. 
A mathematical model was developed in <cit.> for FBS attack where several LBSs supported the network connection in the presence of vehicles. In <cit.>, the positions of LBSs and FBS were made random and similar received signal strength (RSS) of a platoon of vehicles was generated. However, no detection schemes have been proposed, which has left room for future research. Notation: Boldface lower-case letters denote column vectors and boldface upper-case letters denote matrices. The superscript (·)^T represents the transpose. § SYSTEM MODEL §.§ Device Deployment We consider a system that is composed of a source BS, a target BS, an FBS and a UE, as shown in Figure <ref>. The source BS, i.e. LBS_1, and the target BS, i.e. LBS_2, are respectively located in the center of their own cells. UE is at the junction area of these two adjacent cells and FBS is situated randomly in an annular region at a distance from UE. Initially, while UE is connected to LBS_1, it also detects stronger signals coming from a neighboring LBS_2. FBS, which is closer to UE, wiretaps parameters of LBS_2 and disguises its own signal parameters as those from LBS_2. Because of this impersonation, UE detects two signals that appear to originate from the same LBS_2, and it is the nature of UE to connect via the stronger signal. In this way, UE may be connected to FBS. §.§ Serial Transmission Model We follow an improved single-carrier serial transmission model in <cit.>. The index n is used for symbol streams of possibly different rates. Initially, a serial information symbol stream S(n) goes through an error-control encoder, whose output is defined as U(n). If it is not encoded, U(n)=S(n). In each block, the channel order is defined as L_c, the block size as N_b, and the total length of the linear convolution is P = N_b + L_c. The sequence U(n) is grouped into blocks U(i) = [ U(iN_b),U(iN_b+1),...,U(iN_b+N_b-1) ]^T. The i-th observed block can be written as Z(i) = HU(i) + ζ(i) where each entry of the Toeplitz convolution matrix H is [ H]_p,n = H(p-n), and the AWGN is ζ(i) = [ ζ(iP),ζ(iP+1),...,ζ(iP+P-1) ]^T. The structure of this model is depicted in Figure <ref>. §.§ Assumptions for FBS Assumption 1 FBS initiates attacks by imitating a neighboring LBS, i.e. LBS_2, of the source BS, i.e. LBS_1. * When FBS is imitating LBS_2, it will focus only on LBS_2 and not wiretap or imitate transceiving messages of other possible LBSs - LBS_1, LBS_3 etc. This is because FBS is relatively closer to LBS_2; otherwise, there would be too much workload for FBS. Assumption 2 In order to attack as many UEs as possible, FBS wiretaps UEs’ direct transceiving synchronization messages and UL allocation messages with LBS_2. * UEs’ messages sent from and to other LBSs, i.e. LBS_1, and possible LBS_3 etc., will not be wiretapped by FBS. * FBS begins sending UL allocation messages and timing advance (TA) messages to a UE only after it learns that the UE is sending synchronization messages to LBS_2. FBS does not send these messages after it has wiretapped LBS_2's complete UL allocation messages. * Due to broadcasting, UL allocation messages that FBS sends to a certain UE can be detected by other UEs only as MR messages from FBS, but will not be responded to. To compare the performances of different detection schemes and for simplicity, we focus on the scenario where there is only one UE. The extension to multiple UEs is left as future work.
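As an illustration of this block model, the following minimal sketch (assuming Python/NumPy; the channel taps, block size, modulation, and SNR are illustrative assumptions rather than the simulation settings used later) builds the tall Toeplitz convolution matrix and generates one observed block Z(i) = HU(i) + ζ(i):

import numpy as np

rng = np.random.default_rng(0)

def toeplitz_channel(h, n_b):
    # tall (P x N_b) convolution matrix with [H]_{p,n} = h[p-n], P = N_b + L_c
    L_c = len(h) - 1
    H = np.zeros((n_b + L_c, n_b), dtype=complex)
    for p in range(n_b + L_c):
        for n in range(n_b):
            if 0 <= p - n <= L_c:
                H[p, n] = h[p - n]
    return H

def qam16(bits):
    # Gray-mapped 16-QAM with unit average symbol energy
    levels = np.array([-3.0, -1.0, 3.0, 1.0]) / np.sqrt(10.0)   # index = 2*b0 + b1
    b = bits.reshape(-1, 4)
    return levels[2 * b[:, 0] + b[:, 1]] + 1j * levels[2 * b[:, 2] + b[:, 3]]

n_b = 4                                              # block size N_b
h = np.array([1.0, 0.5, 0.2], dtype=complex)         # illustrative channel impulse response (L_c = 2)
H = toeplitz_channel(h, n_b)
u = qam16(rng.integers(0, 2, size=4 * n_b))          # one block U(i) of N_b symbols
snr_db = 15.0
sigma2 = np.mean(np.abs(H @ u) ** 2) / 10.0 ** (snr_db / 10.0)
zeta = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(H.shape[0])
                                + 1j * rng.standard_normal(H.shape[0]))
z = H @ u + zeta                                     # observed block Z(i) = H U(i) + zeta(i)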
§ PROPOSED DETECTION SCHEME §.§ Fundamental Component Based on the handover procedure in the 3GPP Specification <cit.>, we design a table of sequence, which consists of a series of symbols used to verify the legitimacy of received signals. To ensure that symbols are always selected in successive order, the table of sequence consists of two identical halves. The first half, from Symbol 1 to Symbol N on the left, is identical to the second half on the right, as shown in Figure <ref>. As shown in Figure <ref>, before an LBS sends signals, a fixed number of symbols, referred to as the selected symbols, are first chosen from the table of sequence. These symbols serve as a precheck sequence placed ahead of the regular information in the signal. The beginning symbol can be chosen from the first half of the sequence, and the selection ends within the second half. With M-QAM modulation, every log_2(M) bits are modulated into (and demodulated from) one symbol. §.§ Adversary Attack Description The FBS imitates the target BS by overhearing the contents that were and are being transmitted by the target BS and replaying them. We assume the FBS knows that some symbols are attached ahead of the regular information for signal verification. Not only the regular information, such as cell ID, TA and UL Grant, but also the selected symbols are wiretapped by the FBS in order to generate a sequence similar or even nearly identical to the one sent by the target BS. However, the UE takes a reasonable time delay into account for security. The UE can estimate the anticipated arrival time of the signal based on the distance between itself and the target BS. After the UE sends the synchronization message to the target BS, the target BS responds with periodic UL allocation messages. Thus, if the FBS waits to overhear the target BS before transmitting the exact symbols, the UE can notice the longer delay of the illegal signal. To make the UE believe that its signals are legal, as soon as it detects synchronization information sent from the UE, the FBS chooses consecutive symbols from the known table of sequence and transmits them to the UE together with the regular information. This lets the signal from the FBS be received by the UE within the anticipated time window. In this way, the FBS may succeed in disguising itself as the target BS without being discovered. §.§ Improved Handover Scheme with Sequence Verification In this scheme, the selected symbols are added ahead of the regular information. Every time it selects symbols, the target BS has the selected sequence randomly begin at a different symbol in the table. As shown in Figure <ref>, the selected symbols begin with Symbol n and end with Symbol 2. This makes it difficult for the FBS to predict exactly what the transmitted precheck sequence is; however, the number of selected symbols is fixed and known to the FBS. The core of this detection scheme lies in the consistency between the anticipated received symbols and the actually received symbols. The key steps are depicted in Figure <ref>: (1) Before handover, the target BS shares the whole table of symbols, so that it is known by all devices, including the FBS. (2) In the Handover Request Ack transmitted from the target BS to the source BS, the beginning symbol index of the selected symbols and the total number of selected symbols are included. (3) The source BS determines the selected symbols and transmits them, together with the regular information, back to the UE. The selected symbols transmitted by the source BS are regarded as the standard precheck sequence of symbols for later verification.
(4) Upon receiving the signal from the source BS, the UE transmits synchronization information to the target BS and waits for a response. (5) The target BS transmits the selected symbols to the UE together with UL allocation or TA information. During verification, the UE checks the received signal in the form of symbols and compares them with the previously received sequence of symbols from the source BS. Based on the distance between the UE and the target BS, the UE can estimate the expected arrival time window of the received signal. Conversely, if a signal arrives at the UE much earlier or later than the expected time, the UE can notice this and mark it as an illegal signal. With verification of both the received bits and the arrival time, the source of the signal can be more reliably authenticated. For this Precheck Sequence-based Detection (PSD) Scheme, a third assumption is made: Assumption 3 Considering the generation cost and computing complexity of transmitting the sequence from the target BS to the source BS, the target BS does not generate a complete new table of symbols for every UE. Instead, the general sequence table is unchanged, kept in storage and public to all devices. * Although the FBS can know the whole table of sequence, it does not know what the beginning symbol is. * Every time the source BS decides to hand the UE off to the target BS, the target BS only needs to inform the source BS of the beginning symbol index before transmitting the selected symbols and other messages. * When the FBS imitates the target BS, it only overhears and focuses on the communication between the target BS and the UEs. As the FBS is closer to the target BS than to other LBSs, it does not know about the transmissions of other LBSs. § NUMERICAL RESULTS AND DISCUSSION The evaluation criterion for a legal sequence of symbols is the bit error rate (BER). In particular, for the case where two signals, one from the target BS and the other from the FBS, arrive at nearly the same time, the UE compares the two sets of selected symbols with the aforementioned standard precheck sequence of symbols. The signal with the higher BER is regarded as illegal and as coming from the FBS. §.§ Results Analysis and Comparison Let the table of symbols be generated by Gray mapping and modulated via 16-Quadrature Amplitude Modulation (16-QAM). We set the length of the table of symbols to 32, the block size to N_b=4 and the channel order to L_c=2. Under 16-QAM, a total of 2 blocks, i.e. 8 symbols, are added ahead of the regular transmitted information. Each simulation below uses MATLAB R2022a and is conducted over 10,000 realizations. Figure <ref> depicts the successful cheating rate (SCR) of the PSD Scheme under different combinations of table length and sequence length. In a stable communication system with comparatively low BER, the change of sequence length does not noticeably affect detection performance when the table length stays the same. However, with a fixed sequence length, increasing the table length leads to a lower SCR and thus better detection performance. The PSD Scheme is compared with representative schemes proposed in <cit.>, <cit.> and <cit.>, respectively. Since the most vulnerable situations for these references differ with the transmit power of the FBS, we make the comparisons separately in the figures below. First, comparisons are made between the PSD Scheme and the scheme in <cit.>. The scheme in <cit.> assumes that FBS messages usually have a signal strength higher than a certain value, approximately three standard deviations above the average.
As shown in Figure <ref>, if the FBS appropriately calculates its transmit power so that the UE's RSS stays within the legal range, the UE can still be easily attacked. With the PSD Scheme, however, incorrect sets of sequence symbols can be directly judged as illegal. Therefore, the SCR under this scheme does not increase with the transmit power of the FBS. Next, a distance threshold-based scheme was proposed in <cit.>, which uses a transmission model to calculate the distance of the UE from the target BS and sets a distance threshold. In addition, a suspicious region was designed in <cit.> with a small pre-set value similar to the false alarm rate in hypothesis testing theory. These two schemes cannot help in the vulnerable cases where the RSS from the FBS is similar to that from the target BS, as shown in Figure <ref> and Figure <ref>. §.§ Potential Cost Discussion Although the PSD Scheme is designed so as not to largely affect network efficiency, four main costs are unavoidable: (1) Memory space to store the table of sequence. Since the table of symbols is not temporary (only the selected symbols are generated for each handover), the table of symbols is known to and stored in each LBS, which consumes a certain amount of memory. (2) Additional sequence information in the Handover Preparation stage. Between the source BS and the UE, the Handover Request Ack already includes a dedicated RACH preamble, access parameters, SIBs, etc. With a long sequence of symbols to transmit, the regular transmission rate and efficiency can be impaired. (3) Additional sequence information in the Handover Execution stage between the target BS and the UE. In response to the UE synchronization request, the downlink transmission from the target BS already includes UL allocation, UL grant and timing advance. Symbols used as headers are extra transmission overhead. (4) Synchronization. Synchronization is assumed by default. However, the PSD Scheme has higher synchronization requirements than RSS-based schemes, since it is sensitive to inconsistency between the desired symbols and the received ones. Inaccurate synchronization may degrade the detection performance. § CONCLUSION In this paper, a PSD Scheme has been proposed for the detection of an FBS based on the received signal sequence. Under the considered adversary scenario, the proposed scheme, which uses the table of symbols and the selected sequence of symbols for verification, can greatly increase detection accuracy and efficiency. Moreover, with a fixed sequence length, increasing the table length yields better detection performance. Finally, the PSD Scheme is compared with several representative FBS detection schemes and shows clearly better performance. In future work, we will consider the effects of the potential costs on the PSD Scheme.
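As a closing illustration, the decision rule of the PSD Scheme can be prototyped as follows. This is a minimal Python sketch: the table contents, the sequence length, the way the FBS guesses a start index and the bit-flip probabilities are illustrative assumptions, not the MATLAB simulation used above, and the helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

TABLE_LEN = 32                  # length of the symbol table
SEQ_LEN = 8                     # number of selected precheck symbols
BITS_PER_SYMBOL = 4             # 16-QAM: log2(16) bits per symbol

# Public table of symbols, stored as bit patterns; selection wraps around,
# mirroring the two identical halves in the paper's figure.
table_bits = rng.integers(0, 2, size=(TABLE_LEN, BITS_PER_SYMBOL))

def select_precheck(start_idx: int) -> np.ndarray:
    """Pick SEQ_LEN consecutive symbols starting at start_idx (circular)."""
    idx = (start_idx + np.arange(SEQ_LEN)) % TABLE_LEN
    return table_bits[idx].reshape(-1)          # flattened bit sequence

def ber(reference: np.ndarray, received: np.ndarray) -> float:
    return float(np.mean(reference != received))

# Source BS gives the UE the standard precheck sequence (random start index).
start = int(rng.integers(TABLE_LEN))
reference = select_precheck(start)

# Legitimate target BS sends the same sequence, lightly corrupted by the channel.
legit_rx = reference ^ (rng.random(reference.size) < 0.01)

# FBS guesses a start index; with high probability it picks the wrong one.
fbs_rx = select_precheck(int(rng.integers(TABLE_LEN))) ^ (rng.random(reference.size) < 0.01)

# UE keeps the candidate with the lower BER and flags the other as illegal.
candidates = {"target BS": ber(reference, legit_rx), "FBS": ber(reference, fbs_rx)}
flagged = max(candidates, key=candidates.get)
print(candidates, "-> flagged as illegal:", flagged)
```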
http://arxiv.org/abs/2307.03205v1
20230706031706
Joint Computing Offloading and Resource Allocation for Classification Intelligent Tasks in MEC Systems
[ "Yuanpeng Zheng", "Tiankui Zhang", "Jonathan Loo", "Yapeng Wang", "Arumugam Nallanathan" ]
cs.NI
[ "cs.NI", "eess.SP" ]
Joint Computing Offloading and Resource Allocation for Classification Intelligent Tasks in MEC Systems Yuanpeng Zheng, Student Member, IEEE, Tiankui Zhang, Senior Member, IEEE, Jonathan Loo, Yapeng Wang, and Arumugam Nallanathan, Fellow, IEEE Part of this work has been presented at the IEEE Wireless Communications and Networking Conference (WCNC), Glasgow, Scotland, UK, Mar., 2023 <cit.>. This work is supported by Beijing Natural Science Foundation under Grants 4222010. (Corresponding author: Tiankui Zhang) Yuanpeng Zheng, Tiankui Zhang are with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: {zhengyuanpeng, zhangtiankui}@bupt.edu.cn). Jonathan Loo is with the School of Computing and Engineering, University of West London, London W5 5RF, U.K. (e-mail: [email protected]). Yapeng Wang is with Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR, China (e-mail: [email protected]). Arumugam Nallanathan is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: [email protected]). August 1, 2023 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Mobile edge computing (MEC) enables low-latency and high-bandwidth applications by bringing computation and data storage closer to end-users. Intelligent computing is an important application of MEC, where computing resources are used to solve intelligent task-related problems based on task requirements. However, efficiently offloading computing and allocating resources for intelligent tasks in MEC systems is a challenging problem due to complex interactions between task requirements and MEC resources. To address this challenge, we investigate joint computing offloading and resource allocation for intelligent tasks in MEC systems. Our goal is to optimize system utility by jointly considering computing accuracy and task delay to achieve maximum system performance. We focus on classification intelligence tasks and formulate an optimization problem that considers both the accuracy requirements of tasks and the parallel computing capabilities of MEC systems. To solve the optimization problem, we decompose it into three subproblems: subcarrier allocation, computing capacity allocation, and compression offloading. 
We use convex optimization and successive convex approximation to derive closed-form expressions for the subcarrier allocation, offloading decisions, computing capacity, and compressed ratio. Based on our solutions, we design an efficient computing offloading and resource allocation algorithm for intelligent tasks in MEC systems. Our simulation results demonstrate that our proposed algorithm significantly improves the performance of intelligent tasks in MEC systems and achieves a flexible trade-off between system revenue and cost considering intelligent tasks compared with the benchmarks. Computing offloading, intelligent tasks, mobile edge computing, resource allocation. § INTRODUCTION With the rapid development of mobile edge computing (MEC), which supports not only communication but also computing and storage, more and more new applications such as computer vision, natural language processing, semantic communication, etc., are emerging constantly. By being closer to users of network than traditional cloud computing, MEC can obviously reduce application completion time and improve the quality of user experience with specific tasks. In the context of the increasing number of intelligent computing scenarios for intelligent tasks, MEC needs to tackle with many related problems with specific characteristics <cit.> which is diverse and different from traditional resource allocation problems. However, few existing works consider the various requirements of those characteristics such as computing accuracy <cit.> and parallel computing <cit.> while considering resource allocation problems in MEC. Obviously, how to efficiently allocate diversified resource to support the specific demands of intelligent tasks is still an unaddressed problem. Hence, in the context of massive Internet of Things (IoT) devices deployment and limited terminal computing capacity, characteristics of computing tasks are increasingly complex and their impact escalates in MEC. Existing works on computing offloading and resource allocation in MEC systems has become specific and multidimensional<cit.>. C. Wang et al.<cit.> considered computation offloading and content caching strategies in wireless cellular network with MEC and formulate the total revenue of the network. With considering computing offloading and large data volume, Y. Ding et al.<cit.> propose a novel online edge learning offloading scheme for UAV-assisted MEC secure communications, which can improve the secure computation performance. Considering edge cache-enabling UAV-assisted cellular networks, T. Zhang et al.<cit.> formulated a joint optimization problem of UAV deployment, caching placement and user association for maximizing quality of experience of users, which is evaluated by mean opinion score. J. Feng et al.<cit.> considered the stochastic nature of tasks and proposed a framework that maximizes revenue of network providers in MEC systems, where a multi-time scale scheme was adopted to increase revenue on the basis of QoS guarantee. As an important scene, J. Y. Hwang et al.<cit.> introduced the IoT platform with an efficient method of integrating MEC and network slice to maximize the effect of decreasing delay and traffic prioritization. W. Lu et al.<cit.> proposed two secure transmission methods for multi-UAV-assisted mobile edge computing based on the single-agent and multi-agent reinforcement learning respectively and achieve larger system utility. T. 
Zhang et al.<cit.> proposed a new optimization problem formulation that aims to minimize the total energy consumption including communication-related energy, computation-related energy and UAV's flight energy by optimizing the bits allocation, time slot scheduling, and power allocation as well as UAV trajectory design with multiple computation strategies. As shown above, these works studied characteristics of tasks in MEC systems under the assumption of simplified computing process. Nevertheless, in actual application scenarios, intelligent tasks have complicated characteristics and demands including complexity, accuracy and parallelism that will cause many changes, which need to be considered in the computing offloading and resource allocation in MEC systems. With the increasing trend of artificial intelligence in recent years, as a form of computing that solves practical problems by optimizing existing computing methods and resources systematically and holistically according to task requirements<cit.>, intelligent computing has brought more requirements to the MEC field. The researches on intelligent computing for intelligent tasks in MEC are getting more attention<cit.>. H. Xie et al.<cit.> investigated the deployment of semantic communication system based on edge and IoT devices where MEC servers computing the semantic model and IoT devices collect and transmit data based on semantic task model. X. Ran et al.<cit.> focused on a framework that ties together front-end devices with more powerful backend helpers to allow intelligent tasks to be executed locally or remotely in the edge with considering the complex interaction between computing accuracy, video quality, battery constraints, network data usage and network conditions. By considering the computer vision to video on cloud-backed mobile devices using intelligent tasks, S. Han et al.<cit.> studied the resource management including strain device battery, wireless data and cloud cost budgets in MEC systems. M. Jankowski et al.<cit.> introduced the intelligent tasks of image retrieval problem at the wireless edge which maximizes the accuracy of the retrieval task under power and bandwidth constraints over the wireless link. X. Xu et al.<cit.> indicated there are increasing gaps between the computational complexity and energy efficiency required by data scientists and the hardware capacity made available by hardware architects while processing intelligent tasks in edge and discussed various methods that help to bridge the gaps. Apparently, the research of intelligent computing for specific intelligent tasks in MEC has become extensive and gradually mature, which represents resource allocation for intelligent tasks in MEC has a certain basis. In this context, some studies considering the complicated characteristics of specific intelligent tasks above become important and lay the foundation for resource allocation in MEC<cit.>. B. Gu et al.<cit.> investigated the fitting of modelling classification accuracy by verification of large data sets for intelligent tasks and find that power law is the best among all models. As the computing tasks become complicated gradually, D. Bruneo et al.<cit.> studied the requirements of computing infrastructures and introduced the parallel computing model for complex computing tasks with the corresponding quality of service. Considering the scenario of intelligent tasks, H. 
Xie et al.<cit.> proposed a brand new framework of semantic communication where a deep learning based system for text transmission combined with natural language processing and semantic layer communication was constructed. Analogously, M. Bianchini et al.<cit.> and D. Justuset al.<cit.> indicated that the prediction of execution time of intelligent tasks depends on many influence factors including the construction of tasks and hardware features. In particular, W. Fan et al.<cit.> consider a quality-aware edge-assisted machine learning task inference scenario and propose a resource allocation scheme to minimize the total task processing delay considering the inference accuracy requirements of all the tasks. Obviously, in the face of specific intelligent tasks such as DNN training and inference, MEC can take advantage of control and optimization to perform more tasks at low cost while considering intelligent computing scenarios. §.§ Motivation and Contribution As mentioned above, the combination of computing offloading and resource allocation in MEC systems considering the demands of intelligent tasks is still an unaddressed research area, which motivates this contribution. In this paper, considering the intelligent computing for intelligent tasks, we study specific classification intelligence tasks and adopt the training accuracy fitting model<cit.> and parallel computing model of hardware servers<cit.> as key influencing factors. Our scenario is based on classification intelligent tasks in <cit.> considering lightweight distributed machine learning training and parallel computing in MEC systems combined with control and optimization through communication. We model the key indicator, i.e., computing accuracy into our resource allocation algorithm and make the trade-off with task delay, which improves the applicability of our model in intelligent task scenarios. Though we focus on specific intelligent tasks due to the differences between tasks, our method is also proper for other similar applications. The main contributions of this paper are as follows: * We formulate an optimization problem that considers both the accuracy requirements of tasks and the parallel computing capabilities of MEC systems. We adopt the training accuracy fitting model of classification intelligence tasks and parallel computing model of hardware servers to represent the precise demands while considering computing offloading and resource allocation. We define the system utility which consists of the system revenue depending on computing accuracy and cost depending on task delay. Therefore, an integrated framework for computing offloading and resource allocation for intelligent tasks has the potential to improve the intelligent computing performance of the MEC systems. * We solve the highly coupled computing offloading and resource allocation problem by decoupling manifold optimization variables through the idea of iterative optimization into three subproblems: subcarrier allocation, computing capacity allocation and compression offloading. We design an efficient computing offloading and resource allocation algorithm for intelligent tasks in MEC systems where the subcarrier allocation problem and compression offloading problem are solved by successive optimization approximation and the computing capacity allocation problem is solved by convex optimization. We derive closed-form expressions for all variables and acquire the suboptimal solution through iterative optimization. 
* We demonstrate the simulation results which verify that our framework is applicable to intelligent computing scenario in MEC systems. It is shown that the proposed algorithm significantly improves the performance of intelligent tasks in MEC systems and achieves a flexible trade-off between system revenue and cost considering intelligent tasks compared with the benchmarks. §.§ Organization The rest of this paper is organized as follows. In Section II, the system model and utility function are introduced. In Section III, the problem formulation and decomposition into several subproblems of the problem are represented to design our algorithm. The performance of the proposed algorithm is evaluated by the simulation in Section IV, which is followed by the conclusions in Section V. § SYSTEM MODEL We consider the heterogeneous cellular scenario as shown in Fig. 1(a) and equip MEC servers on small base stations (SBS) to form the MEC systems. We set the total amount of users is U. The set of MEC systems is denoted by K^S = {1,...,k,...,K} and it is assumed that SBS k is associated with U_k mobile users. We let U^S_k = {1,...,u_k,...,U_k} denote the set of users associating with SBS k where u_k refers to the uth user which associates with the kth SBS. The set of classification intelligent tasks is denoted by M^S = {1,...,m,...,M} which has the property of parallel computing, i.e., task m requires parallel processing on the server, and the requirement of accuracy. In our model, we consider two types of computing including local computing and offloading to MEC computing in our systems as shown in Fig. 1(b). We adopt few-shot learning and data compression in step 1 in Fig. 1(b) which is applied to size compression in traditional classification intelligent tasks and feature extraction in semantic tasks for training. For example, the image set collected by the local device needs to be classified and recognized, therefore local intelligent device will choose local computing or offloading to edge computing where image set needs size compression or semantic feature extraction in our model. Let the bandwidth resource of our system be B, computing capacity of local device be F^L_u_k, computing capacity of the MEC server be F_k and delay limit for computing task m be t̃_m. §.§ Communication Model In our system, every SBS in the network is equipped with the MEC server, so each user can offload its computing task to the MEC server through the SBS to which it is connected. We denote x_u_k∈{0,1}, ∀ u,k as the computing offloading indicator variable of user u_k. Specially, x_u_k = 1 if user u_k offload its computing task to the MEC server via wireless network and we have x_u_k = 0 if user u_k determine to compute its task locally on the mobile device. Therefore, we denote x = {x_u_k}_u_k ∈ U^S_k, k ∈ K^S as the offloading indicator vector. In this paper, we consider that spectrum used by SBSs is overlaid and spectrum within one SBS is orthogonally assigned to every user. Therefore, there exists interference between SBSs but there will be no interference within one SBS. We only analyse uplink transmission and divide the total spectrum into N subcarriers, which is denoted as N^S = {1,...,n,...,N}. All users can reuse subcarriers for uplink transmission to improve spectrum utilization. We denote ρ_u_kn∈{0,1},∀ u,k,n as subcarrier allocation variables, where ρ_u_kn = 1 means subcarrier n is allocated to user u_k which is associated with SBS k and ρ_u_kn = 0 otherwise. 
Obviously, one subcarrier on an SBS can only be allocated to one user at a time, therefore we have ∑^U_k_u_k=1ρ_u_kn≤ 1,∀ n. We denote ρ = {ρ_u_kn}_u_k∈ U^S_k, k∈ K^S,n∈ N^S as the subcarrier allocation vector. Then, the uplink transmission rate of user u_k on subcarrier n given as r_u_kn = B/Nlog_2( 1 + P_u_kng_u_kn/I_u_kn + σ^2), ∀ u,k,n, where P_u_kn represents transmit power from user u_k to SBS k, g_u_kn represents wireless channel gain between user u_k and SBS k on subcarrier n, and I_u_kn represents co-channel interference of users associating with other SBSs on the same frequency of user u_k, which is given as I_u_kn = ∑_c∈ K^S,c≠ k∑^U_c_u'_c=1ρ_u'_cng_u'_cnP_u'_cn,∀ u,k,n. σ^2 denotes the noise power of additive white Gaussian noise. Obviously, the uplink transmission rate of user u_k is denoted as r_u_k = ∑^N_n=1ρ_u_knr_u_kn,∀ u,k, which means the total transmission rate of user u_k is the summation of the transmission rate on all subcarriers which are allocated to user u_k. §.§ Computing Model For the computing model, we consider each user u_k has a computing task m, and denote z_u_km∈{0,1}, ∀ u,k,m as the indicator variable of the computing task m of user u_k. Specially, z_u_km = 1 if the computing task of user u_k is m, otherwise z_u_km = 0. In our model, we assume that z_u_km is already given as the user request and we have ∑^M_m=1 z_u_km = 1,∀ u,k. Obviously we consider that one task can be requested by each user at a time, which make our model clearer for task properties of accuracy requirement and parallel computing. The scenario of multi-task case can be acquired after small modified base on our model. We consider two types of computing approaches, i.e., local computing and task offloading. 1) Local Computing: For the local computing approach, the raw data of user u_k is given as a_u_k, and we can acquire the computing delay through the raw data a_u_k directly, which is given as T^L_u_k = ∑ ^M_m=1z_u_kmF_u_km(a_u_k)/F^L_u_k, ∀ u,k, where F_u_km(· ) represent the computing resource overhead of corresponding data volume after parallel computing. In this paper we think of it as approximate linear relationship after the property of parallel computing of tasks has been considered. Note that we neglect power consumption of computing in our model, therefore we do not consider extra computing cost in local device. 2) Task Offloading: For the task offloading approach, user u_k will compress the raw data a_u_k to b_u_k = a_u_k/ε _u_k,∀ u,k, where ε _u_k is denoted as compression ratio of user u_k and ε _u_k≥ 1, ∀ u,k. We denote ε = {ε_u_k}_u_k∈ U^S_k, k∈ K^S as the compression ration vector. Apparently, we have α_u_k = (1-x_u_k)a_u_k + x_u_kb_u_k. Then, the compressed data b_u_k is transmitted to SBS k to process and compute, and the transmission delay of the compressed data from user u_k in wireless link is given as t^comm_u_k = b_u_k/r_u_k,∀ u,k. Let f^O_u_k be computing capacity allocated to user u_k from SBS k and f^O = {f^O_u_k}_u_k∈ U^S_k, k∈ K^S be the computing capacity allocation vector, so that the computing delay of user u_k processing its computing task on SBS k is denoted as T^O_u_k = ∑ ^M_m=1z_u_kmF_u_km(b_u_k)/f^O_u_k,∀ u,k. In our model, we need to consider the influence of multi-task parallel computing of MEC severs. 
According to the virtual machine multiplexing technology, we adopt the influence of multi-task parallel computing on delay of computing of computing task m from <cit.>, which is given as T_u_km = T^O_u_k(1+d)^i_m-1,∀ u,k,m, where i_m represents the amount of parallel computing requested when classification intelligent task m is processed, and d(≥ 0) is degradation factor which means the percentage increase in the expected computing delay experienced by a virtual machine when multiplexed with another virtual machine. Therefore, d is used to represent the impact of multi-task parallel computing on MEC servers. We assume that a maximum of Q parallel numbers are allowed to be processed simultaneously on the MEC server, i.e., i_m ≤ Q. Therefore, the delay of computing of user u_k on SBS k is denoted as t^comp_u_k = ∑^M_m=1 z_u_kmT_u_km, ∀ u,k. Similar to the study in <cit.>, we notice the fact that the downlink data volume of computing outcome is much smaller than uplink data volume, so we neglect the downlink transmission in this work. Due to the character of local computing, we do not consider the influence of multi-task parallel computing on local device. §.§ Utility Function As shown from above introduction, we consider joint allocation of communication resource and computing capacity with compression and parallel computing. In this paper, we focus on maximum the system utility under computing accuracy constraint and task delay constraint. For each user u_k, we consider marginal utility of the combination of system revenue, i.e., computing accuracy and system cost, i.e., task delay. To design intelligent computing and introduce the feature of classification intelligence tasks in our model, we adopt the 3-parameters power law fitting formula between the data volume and the computing accuracy from <cit.>. Note that the accuracy fitting formula of classification intelligence tasks is used to represent the training process which is different from the inference accuracy in <cit.>. Obviously, we consider the scenario of distributed few-shot learning and design the resource allocation in this paper, and the inference accuracy will be considered in our future work. For the convenience of subsequent modelling, we adopt the simplified form which is given as y(α_u_k) = p - q α_u_k^-r,∀ u,k, where α_u_k represents the data volume to be computed of user u_k and p,q,r are all fitting paraments. In this work, the limit of computing accuracy of computing task m is set as ỹ_m. Therefore, the computing accuracy of user u_k in our model is denoted as y(α_u_k) = p - q ((1-x_u_k)a_u_k + x_u_kb_u_k)^-r. In scenario of classification intelligence tasks, the raw data a_u_k is training samples for several mode and we consider the raw data as data volume for resource allocation. Note that we adopt b_u_k as the data volume for accuracy calculation after compression and transmission. This mode is used for the lite distributed training system that need to make trade-off between delay and accuracy. The total task delay of user u_k in our model is t_u_k = (1-x_u_k)T^L_u_k + x_u_k(t^comm_u_k+t^comp_u_k), ∀ u,k. Note that we consider the two different properties, i.e., task delay and computing accuracy, of classification intelligent tasks which is different from traditional computing tasks. In the system, these two properties are the primary concern in practice <cit.>. To moderate the trend of accuracy and delay and balance the complexity of algorithm, we model the system utility with convex and nondecreasing function. 
Here the logarithmic function of diminishing marginal utility with trade-off between system revenue and cost which has been used frequently in literature<cit.>, is adopted as utility function. Therefore the utility of user u_k is denoted as R_u_k = ln( L y(α_u_k)/t_u_k), ∀ u,k, where L is denoted as the weight parameter between system revenue and cost. We adopt the form of division and weight parameter in logarithmic function to control the order of magnitude of delay because the change in value of accuracy is small. Obviously, the equation (<ref>) represents the marginal utility of users while being processed in this system considering trade-off between accuracy and delay. Therefore, the system utility is given as R = ∑_k∈ K^S∑_u_k∈ U^S_k R_u_k, which is the system optimization objective considered in this paper. § PROBLEM FORMULATION AND ALGORITHM DESIGN In order to maximize the system utility, we formulate it as an optimization problem and decompose it into several convex optimization subproblems via successive convex approximation (SCA). Then we design the corresponding iterative algorithm to solve the optimization problem. §.§ Problem Formulation and Decomposition Solution We adopt the system utility proposed in (<ref>) as the objective function of our optimization problem, and we formulate it as max_x,ρ,f^O,ε R s.t. (C1): x_u_k∈{0,1}, ∀ u,k, (C2): ρ_u_kn∈{0,1}, ∀ u,k,n, (C3):∑^K_k=1 x_u_k≤ 1, ∀ u, (C4):∑^U_k_u_k=1ρ_u_kn≤ 1, ∀ n, (C5):ε_u_k≥ 1, ∀ u,k, (C6):t_u_k≤∑^M_m=1 z_u_kmt̃_m, ∀ u,k, (C7):y(α_u_k)≥∑^M_m=1 z_u_kmỹ_m, ∀ u,k, (C8):∑^U_k_u_k =1f^O_u_k≤ F_k, ∀ u,k. In (<ref>), the constraints (C1) and (C2) guarantees that the value of two indicator variables is restrict to 0 and 1, constraints (C3), (C4) and (C5) mean one user can only choose one type of computing approach, one subcarrier n on one SBS k can only be allocated to one user u_k at a time and the compressed data is less than or equal to the raw data, constraints (C6) and (C7) are proposed to ensure the limits of the task delay and the computing accuracy are hold, constraint (C8) guarantees that the sum of allocated computing capacity is not greater than total computing capacity of MEC servers. It is observed that constraints (C5) and (C7) is the external to the system which is chosen by users, which means compression ratio of terminals affects the resource allocation of system to large extent and the computing accuracy constraint can change the traditional resource allocation which will be displayed by the benchmark in simulation below. Obviously, the dimensions of variable ρ is the most and will pull the computing complexity of the algorithm below deeply. Apparently, (<ref>) is a non-linear mixed integer programming and non-convex optimization problem, and such problems are usually considered as NP-hard problems. Therefore, we need to decompose it into several subproblems and make some transformation and simplification to solve it iteratively. For convenience of solving (<ref>), we decompose it into three subproblems by giving other variables. 1) Subcarrier Allocation Subproblem: In this subproblem, other variables are give in (<ref>) except subcarrier allocation variable. In order to facilitate the analysis and solution of the problem, the method of binary variable relaxation <cit.> is adopted, variable ρ is relaxed into real value variable as ρ_u_kn∈ [0,1]. Then we can acquire subcarrier allocation subproblem which is given as max_ρ∑_k∈ K^S∑_u_k∈ U^S_kln( L A^δ_u_k/A^β _u_k+x_u_k( b_u_k/r_u_k + A^γ_u_K) ) s.t. 
(C2), (C4), (C6'): A^β _u_k+x_u_k( b_u_k/r_u_k + A^γ_u_K)≤∑^M_m=1 z_u_kmt̃_m, ∀ u,k, where A^δ_u_k = p - q ((1-x_u_k)a_u_k + x_u_kb_u_k)^-r, A^β _u_k = (1-x_u_k)T^L_u_k and A^γ_u_K = t^comp_u_k and in this way, (C6) in (<ref>) is converted to (C6') here. These expressions do not vary with ρ, therefore we use constant expressions to replace them for convenience of solving (<ref>). r_u_k is expanded out to r_u_k = ∑^N_n=1ρ_u_knB/N· log_2( 1 + P_u_kng_u_kn/∑_c∈ K^S,c≠ k∑^U_c_u'_c=1ρ_u'_cng_u'_cnP_u'_cn + σ^2). Obviously, the relationship between ρ and the objective function of (<ref>) is pretty complicated. For convenience of expressing the solution of this problem, we remove constant terms and replace the interference item with a single variable, and get the optimization problem which is denoted as min_ρ,I∑_k∈ K^S∑_u_k∈ U^S_kln( x_u_kb_u_k/B/N∑ ^N_n=1 log_2( 1 + P_u_kng_u_kn/I_u_kn + σ^2)) s.t. (C2), (C4), (C6'): x_u_kb_u_k/B/N∑ ^N_n=1 log_2( 1 + P_u_kng_u_kn/I_u_kn + σ^2)≤ ∑^M_m=1 z_u_kmt̃_m - A^β _u_k - x_u_kA^γ_u_K, ∀ u,k, (C9): I_u_kn = ∑_c∈ K^S,c≠ k∑^U_c_u'_c=1ρ_u'_cng_u'_cnP_u'_cn,∀ u,k,n, where I = { I_u_kn}_u_k∈ U^S_k, k∈ K^S, n∈ N^S represent the interference vector and become constraint (C9) of the above problem. Apparently, (<ref>) is non-convex problem, therefore we adopt the method of SCA to transform it and relaxation variables are introduced as follow S_u_kn≤ρ_u_knl_u_kn, l_u_kn≤ log_2 ( 1 + P_u_kng_u_kn/I_u_kn + σ^2), which will be constraints (C7') and (C8') of the problem. Then the original optimization problem (<ref>) is converted to min_ρ,I,S,l∑_k∈ K^S∑_u_k∈ U^S_kln( x_u_kb_u_k/B/N∑ ^N_n=1S_u_kn) s.t. (C2), (C4),(C6'), (C7'): S_u_kn≤ρ_u_knl_u_kn, (C8'): l_u_kn≤ log_2 ( 1 + P_u_kng_u_kn/I_u_kn + σ^2), (C9): I_u_kn = ∑_c∈ K^S,c≠ k∑^U_c_u'_c=1ρ_u'_cng_u'_cnP_u'_cn,∀ u,k,n. It shows that the right side of (C7') and (C8') require SCA to convert them to convex, and others constraints are all convex. For (C7'), we perform first order Taylor expansion on the right side at point (ρ^i_u_kn,l^i_u_kn) and convert it to S_u_kn≤ ρ^i_u_kn + l^i_u_kn/2 (ρ_u_kn+l_u_kn) - (ρ^i_u_kn + l^i_u_kn)^2/4 - (ρ^i_u_kn - l^i_u_kn)^2/4,∀ u,k,n. For (C8'), we perform first order Taylor expansion on the right side at point I^i_u_kn and convert it to l_u_kn≤ log_2(P_u_kng_u_kn + I_u_kn + σ^2 ) - ( ln(I^i_u_kn + σ^2)+I_u_kn-I^i_u_kn/I^i_u_kn+σ^2)/ln2, ∀ u,k,n. In this way (C7') and (C8') are converted to convex constraints and we can use convex optimization method for SCA iteration <cit.> to solve (<ref>), which is the same way to solve (<ref>). 2) Computing Capacity Allocation Subproblem: Under given other variables except f^O, (<ref>) is simplified to max_f^O∑_k∈ K^S∑_u_k∈ U^S_kln(LA^δ _u_k) - ln (B^β _u_k+t^comp_u_k) s.t. (C6”): B^β _u_k+t^comp_u_k≤∑^M_m=1 z_u_kmt̃_m,∀ u,k, (C8), where the constant term B^β _u_k = (1-x_u_k)T^L_u_k + x_u_kt^comm_u_k, and t^comp_u_k = ∑^M_m=1 z_u_km (1+d)^i_m-1· ∑^M_m=1z_u_kmF_u_km(b_u_k)/f^O_u_k and in this way, (C6) in (<ref>) is converted to (C6”) here. Therefore, (<ref>) is a convex optimization problem and can be solved directly by convex optimization method <cit.>. 3) Compression Offloading Subproblem: We need to solve computing offloading indicator variable x and compression ratio variable ε under given ρ and f^O. For convenience of solving, we adopt binary variable relaxation and relax x into real variable as x_u_k∈{ 0,1 }. The original problem (<ref>) is simplified to max_x,ε∑_k∈ K^S∑_u_k∈ U^S_k ln( L y(α_u_k)/(1-x_u_k)C^δ_u_k+x_u_k/ε_u_kC^β_u_k) s.t. 
(C1), (C3),(C5), (C6”'): (1-x_u_k)C^δ_u_k+x_u_k/ε_u_kC^β_u_k≤∑^M_m=1 z_u_kmt̃_m, ∀ u,k, (C7), where the constant terms C^δ_u_k = T^L_u_k and C^β_u_k = a_u_k/r_u_k + ∑^M_m=1z_u_km(1+d)^i_m-1∑^M_m=1z_u_kmF_u_km(a_u_k)/f^O_u_k, and y(α_u_k) = p - q ((1-x_u_k)a_u_k + x_u_k/ε_u_ka_u_k)^-r and in this way, (C6) in (<ref>) is converted to (C6”') here. Normally, p,q and r satisfy that p>0, q>0 and 0≤ r ≤ 1. We adopt the method of variable substitution and let η_u_k = 1-x_u_k+x_u_k/ε_u_k. Obviously, η_u_k satisfies that 1-x_u_k≤η_u_k≤1 which will be constraint (C5') of the above problem and we can transform problem (<ref>) into max_x,η∑_k∈ K^S∑_u_k∈ U^S_k ln( L (p-q*(a_u_kη_u_k)^-r)/(1-x_u_k)(C^δ_u_k-C^β_u_k)+C^β_u_kη_u_k) s.t. (C1),(C3), (C5'): 1-x_u_k≤η_u_k≤ 1, ∀ u,k, (C6”'): (1-x_u_k)(C^δ_u_k-C^β_u_k)+C^β_u_kη_u_k≤ ∑^M_m=1 z_u_kmt̃_m, ∀ u,k, (C7): p-q*(a_u_kη_u_k)^-r≥∑^M_m=1 z_u_kmỹ_m,∀ u,k. Due to non-convexity of (<ref>), we adopt the method of SCA and let v_u_k≥ ln( (1-x_u_k)(C^δ_u_k-C^β_u_k) + C^β_u_kη_u_k). We perform first order Taylor expansion on the right side at point (x^j_u_k,η^j_u_k) and convert it to v_u_k≥ ln( (1-x^j_u_k)(C^δ_u_k-C^β_u_k) + C^β_u_kη^j_u_k)+ (C^β_u_k-C^δ_u_k)(x_u_k-x^j_u_k) + C^β_u_k(η_u_k - η^j_u_k)/(1-x^j_u_k)(C^δ_u_k-C^β_u_k) + C^β_u_kη^j_u_k, which will be constraint (C10) of the above problem. Therefore, (<ref>) is converted to max_x,η∑_k∈ K^S∑_u_k∈ U^S_k ln( L(p-q*(a_u_kη_u_k)^-r)/v_u_k) s.t. (C1),(C3), (C5'),(C6”'),(C7), (C10): (<ref>). Then we can use convex optimization method for SCA iteration to solve (<ref>) by using standard CVX tools<cit.>. §.§ Algorithm Design and Analysis As mentioned above, we decompose the original NP-hard problem (<ref>) into three subproblems. Then we use the idea of the greedy algorithm to iterate the above solutions of three subproblems and arrive at the suboptimal solution for (<ref>), which is summarized in Algorithm 1. In Algorithm 1, we adopt alternating iteration of three problems and obtain the solutions in closed forms by convex optimization. According to the greedy algorithm and convex optimization theory <cit.>, iteration of three subproblems can ensure | N^q-N^q-1|≤θ, i.e., convergence quickly but only sub-optimality can be guaranteed <cit.>. As we show above, the complexity of Algorithm 1 depends on three subproblems. In subproblem 1, since (<ref>) is solved through SCA iteration by converting constraints, we assume the number of iterations is L_sub1, then the complexity is 𝒪((UN)^2L_sub1). In subproblem 2, since (<ref>) is a convex optimization problem, the complexity is 𝒪(U). In subproblem 3, (<ref>) need to be converted to (<ref>) through SCA and achieve solution in iteration algorithm, we assume the number of iterations is L_sub3, therefore the complexity is 𝒪(UL_sub3). We assume the number of total iteration is L_it, then the overall complexity of Algorithm 1 is 𝒪(((UN)^2L_sub1+U+UL_sub3)L_it). In the way, the NP-hard optimization problem (<ref>) is decomposed into low-complexity subproblems and iteratively solved. § SIMULATION RESULT In this section, we first set the simulation paraments and then show our simulation results to evaluate the performance of our proposed algorithm. 
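As a concrete illustration of the quantities being optimized, the following self-contained Python sketch evaluates the per-user utility for the two computing modes (local computing versus offloading with compression), combining the accuracy fit y(α) = p - qα^(-r), the task delay with parallel degradation (1+d)^(i_m - 1), and the logarithmic utility. All numerical values (data size, rate, capacities, fitting parameters p, q, r, weight L) are illustrative assumptions, not the simulation settings of this paper.

```python
import math

# Illustrative parameters (assumptions, not the paper's simulation values)
p, q, r = 0.95, 0.5, 0.3        # accuracy fitting y(a) = p - q * a^(-r)
L = 100.0                        # revenue/cost weight in the utility
d, i_m = 0.1, 2                  # degradation factor and parallelism of task m
a = 8.0                          # raw data volume (Mbit)
eps = 2.0                        # compression ratio (offloading case)
rate = 20.0                      # uplink rate r_{u_k} (Mbit/s)
F_local, f_edge = 1.0, 5.0       # local / allocated edge capacity (Gcycle/s)
cycles_per_mbit = 0.5            # F_{u_k m}(.) assumed linear in the data volume

def accuracy(volume):
    return p - q * volume ** (-r)

def utility(delay, volume):
    return math.log(L * accuracy(volume) / delay)

# Local computing: full data volume, no transmission
t_local = cycles_per_mbit * a / F_local
u_local = utility(t_local, a)

# Offloading: compress, transmit, then compute with parallel degradation
b = a / eps
t_off = b / rate + (cycles_per_mbit * b / f_edge) * (1 + d) ** (i_m - 1)
u_off = utility(t_off, b)

print(f"local:   delay={t_local:.2f}s utility={u_local:.3f}")
print(f"offload: delay={t_off:.2f}s utility={u_off:.3f}")
```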
§.§ Simulation Parameters We consider a system-level simulation of uplink transmission in a small-cell heterogeneous cellular scenario according to the 3GPP normative document for small cell networks, i.e., the urban micro (UMi) model <cit.>, in which small cells are deployed in channels separate from the MBS, and we adopt the hexagonal cell deployment mode. The distance between a user and an SBS meets the standard of the 3GPP normative document, which implies that there is no interference between SBSs and the MBS and that only outdoor links exist. In our model, four SBSs are deployed in a small-cell area with a total coverage of 200m× 200m. The SBSs provide offloading association and resource allocation for users. Suppose that both LoS and NLoS links exist in our scenario. Let d_u_k be the distance between SBS k and user u_k, and note that the path loss depends on the LoS/NLoS link state <cit.>. For a LoS link, the path loss of user u_k is given by μ^LoS_u_k = 22.0log_10(d_u_k) + 28.0 + 20log_10(F^q), and for a NLoS link, the path loss is given by μ^NLoS_u_k = 36.7log_10(d_u_k) + 22.7 + 26log_10(F^q), where F^q is the carrier frequency. The LoS probability that determines the link state is p^LoS_u_k = min( 18/d_u_k,1) ( 1 - e^-d_u_k/36) + e^-d_u_k/36, and the NLoS probability is p^NLoS_u_k = 1 -p^LoS_u_k. Therefore, according to <cit.>, the channel gain in the small cell network in this scenario is g_u_k = ( p^LoS_u_k10^μ^LoS_u_k + p^NLoS_u_k10^μ^NLoS_u_k) ^-1. In our system model, we consider two processes for computing tasks, namely computing accuracy and parallel computing. Their parameters are set according to the best fitting parameters <cit.>. The simulation parameters are summarized in Table II <cit.>. According to the computing delay and accuracy requirements of several ultra-reliable low-latency communication services <cit.>, we assume there are three task types in our simulation with different requirements. The delay and accuracy limits of the tasks are shown in Table III. §.§ Performance of the Proposed Algorithm In order to verify the performance of the proposed algorithm, we add the following schemes for comparison: * Fixed Channel (FC): subcarriers are allocated in a fixed manner and bandwidth is allocated evenly. * Average Computing (AC): the computing capacity of the MEC servers is allocated evenly. * Without Compression Ratio (WCR) <cit.>: following the scheme in <cit.>, the compression ratio and parallel computing are not considered and computing offloading is performed directly. We demonstrate the convergence of all schemes in Fig. 2. The proposed algorithm converges quickly within L_it iterations and the curve is essentially flat after convergence, which means our SCA- and iteration-based algorithm has good stability and convergence. From the convergence of the comparison algorithms, we find that our proposed algorithm achieves a higher system utility and better optimization behavior in our system model, which jointly allocates communication resources and computing capacity. The behavior of the system utility versus the total number of users U under different bandwidths, i.e., 10 MHz and 50 MHz, is shown in Fig. 3. The system utility increases with the total number of users, and the growth slows down when the total number of users exceeds 35 for our proposed algorithm.
When total number of users is relatively small, the resources are sufficient and resource allocation is efficient, therefore the system utility increases quickly. Nevertheless, as total number of users is relatively large, the resources of system is limited and resource allocation will become inefficient, the growth tendency of system utility will slow down. The comparison schemes all have this property but the trend is not notable, which is different for different algorithms. For example, the trend of FC is the least significant because the fixed allocation of communication resource result in inefficient resource allocation and is less affected by resource limits. We can see in this figure that the higher bandwidth has larger impact in our proposed algorithm than other comparison schemes which means our scheme have higher usage in bandwidth. We compare system utility with computing capacity F_k of MEC servers under different bandwidth in Fig. 4. From the trend we can see that there is a inflection point of system utility in F_k = 200 Gigacycle/s in our proposed algorithm. This is because we consider the computing accuracy limit in our system model, the system utility depends on computing accuracy and task delay and our proposed algorithm need make a trade-off between them. We can get a better trade-off when F_k is relatively small and reaches a certain value while increasing. However, the communication resource will be limited and affects the compression ration and limits computing accuracy when F_k continues to rise, therefore users would choose local computing which result in a slowdown in growth of the system utility. This property also presents in comparison algorithms FC and AC with different inflection points, but in WCR where compression ratio is not considered, the trade-off does not existed while F_k is increasing. Also, we can see that higher bandwidth do not have a significant impact on this trend of system utility. The trend of system utility varying with parallel parameter Q under different bandwidth is represented in Fig. 5 which is used to evaluate the property of parallel computing of classification intelligent tasks. We can find that the system utility decreases as the Q increases in our proposed algorithm, which means parallel computing has a important impact on our system model. In our system, The increase in Q means that classification intelligent tasks will request more parallelism, which will cause that the computing delay increases and have a impact on total offloading process. Therefore, the system utility would decrease because the trade-off between computing accuracy and task delay is affected. This property also presents in comparison algorithms but it is not that significant in AC, which is because the average allocation of computing capacity would reduce the impact of Q. The trend of WCR is not represented because parallel computing is not considered. Moreover, the higher bandwidth have some influence on this trend but not notable in our system model. We plot the trade-off between system revenue and negative system cost vs. L in Fig. 6, where SR means system revenue and SC means negative system cost. Note that the negative system cost of our proposed algorithm is the largest which means system cost is the smallest, which make the representation of Fig. 6 more intuitional. The system utility in (<ref>) indicates that our proposed algorithm can balance system revenue, which is based on computing accuracy, and system cost, which is based on task delay. 
In this figure we can see that our proposed algorithm can get a better trade-off than comparison algorithm while L increases. It is obvious that the decrease on system revenue and increase on system cost while L is growing in our proposed algorithm, but this trend is not significant in other comparison schemes. Therefore, our proposed algorithm can balance revenue on computing accuracy and cost on task delay significantly to stabilize the increase of the system utility, and can acquire a better trade-off to make systems increasingly stable compared with comparison algorithms. § CONCLUSION In this paper, we investigated computing offloading and resource allocation for intelligent tasks in MEC systems. Specially, we focus on classification intelligence tasks and formulate an optimization problem that considers both the accuracy requirements of tasks and the parallel computing capabilities of MEC systems. We decomposed it into three subproblems: subcarrier allocation, computing capacity allocation and compression offloading which were solved iteratively through convex optimization and successive convex approximation. Based on the solutions, we design an efficient computing offloading and resource allocation algorithm for intelligent tasks in MEC systems. We focus on the specific intelligent tasks but our method can apply to other applications. Simulation results have demonstrated that our proposed algorithm significantly improves the performance of intelligent tasks in MEC systems and achieves a flexible trade-off between system revenue and cost considering intelligent tasks compared with the benchmarks. 99 IEEEtran ref32 Y. Zheng, T. Zhang, R. Huang and Y. Wang, “Computing Offloading and Semantic Compression for Intelligent Computing Tasks in MEC Systems," 2023 IEEE Wireless Commun. Netw. Conf. (WCNC), Glasgow, United Kingdom, 2023, pp. 1-6. ref1 H. Xie and Z. Qin, “A lite distributed semantic communication system for internet of things," IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 142-153, Nov. 2020. ref2 B. Gu, F. Hu and H. Liu, “Modelling classification performance for large data sets," International Conf. Web-Age Information Management, pp. 317-328, Springer, Berlin, Heidelberg, 2001. ref3 D. Bruneo, “A stochastic model to investigate data center performance and QoS in IaaS cloud computing systems," IEEE Trans. Parallel Distrub. Syst., vol. 25, no. 3, pp. 560-569, Mar. 2013. ref4 C. Wang, C. Liang, F.R. Yu, and Q. Chen and L. Tang, “Computation offloading and resource allocation in wireless cellular networks with mobile edge computing," IEEE Trans. Wireless Commun., vol. 16, no. 8, pp. 4924-4938, May. 2017. ref5 Y. Ding, Y. Feng, W. Lu; S. Zheng, N. Zhao, L. Meng, A. Nallanathan and X. Yang, “Online Edge Learning Offloading and Resource Management for UAV-Assisted MEC Secure Communications,"IEEE J. Sel. Topics Signal Process., vol. 17, no. 1, pp. 54-65, Jan. 2023. ref6 T. Zhang, Y. Wang, Y. Liu, W. Xu and A. Nallanathan, "Cache-Enabling UAV Communications: Network Deployment and Resource Allocation," IEEE Trans. Wireless Commun., vol. 19, no. 11, pp. 7470-7483, Nov. 2020. ref10 J. Feng, Q. Pei, F. R. Yu, X. Chu, J. Du, and L. Zhu, “Dynamic Network Slicing and Resource Allocation in Mobile Edge Computing Systems," IEEE Trans. Veh. Technol., vol. 69, no. 7, pp. 7863-7878, Jul. 2020. ref8 J. Y. Hwang, L. Nkenyereye and N. M. Sung, “IoT service slicing and task offloading for edge computing," IEEE Internet Things J., vol. 44, no. 4, pp. 1-14, Apr. 2020. ref9 W. Lu, Y. Mo, Y. Feng, Y. 
http://arxiv.org/abs/2307.00717v1
20230703024214
SSC3OD: Sparsely Supervised Collaborative 3D Object Detection from LiDAR Point Clouds
[ "Yushan Han", "Hui Zhang", "Honglei Zhang", "Yidong Li" ]
cs.CV
[ "cs.CV" ]
Collaborative 3D object detection, which exploits the interactions among multiple agents, has been widely explored in autonomous driving. However, existing collaborative 3D object detectors in a fully supervised paradigm heavily rely on large-scale annotated 3D bounding boxes, whose collection is labor-intensive and time-consuming. To tackle this issue, we propose SSC3OD, a sparsely supervised collaborative 3D object detection framework, which only requires each agent to randomly label one object in the scene. Specifically, this model consists of two novel components, i.e., the pillar-based masked autoencoder (Pillar-MAE) and the instance mining module. The Pillar-MAE module aims to reason over high-level semantics in a self-supervised manner, and the instance mining module generates high-quality pseudo labels for collaborative detectors online. By introducing these simple yet effective mechanisms, the proposed SSC3OD can alleviate the adverse impacts of incomplete annotations. We generate sparse labels based on collaborative perception datasets to evaluate our method. Extensive experiments on three large-scale datasets reveal that our proposed SSC3OD can effectively improve the performance of sparsely supervised collaborative 3D object detectors. § INTRODUCTION Collaborative perception <cit.> plays an essential role in expanding the perspective of autonomous vehicles by leveraging the interactions among multiple agents <cit.>, and it has gained considerable attention both in academia and industry <cit.>. Recent pioneering works have been dedicated to the development of high-quality datasets <cit.>, effective fusion strategies <cit.>, and communication mechanisms <cit.>. Besides, another research line explores how to overcome real-world issues such as latency <cit.> and pose errors <cit.>. Despite great success, these studies on collaborative 3D object detection have adopted a fully supervised learning approach, which heavily relies on large-scale annotated 3D bounding boxes. However, collecting accurate annotations is labor-intensive and time-consuming, particularly for collaborative 3D object detection involving multiple agents. Thus, developing collaborative 3D object detectors that rely on lightweight object annotations is a significant issue in practical applications. While some works have explored weakly supervised 3D object detection in the single-agent setting <cit.>, none have explored this learning paradigm in collaborative 3D object detection. To address this research gap, we investigate collaborative 3D object detection in sparse labeling scenarios. In this setting, each agent only needs to annotate one 3D object in a scene, as illustrated on the right side of Fig. <ref>. This annotation strategy substantially reduces the labeling cost of the collaborative perception task, thus helping to extend it to more autonomous driving scenarios. However, sparsely annotated 3D object detection poses new challenges, as unlabeled instances can interfere with training, leading the collaborative detector to misclassify some objects as background and resulting in a significant performance decline. To address this challenge, we propose SSC3OD, a sparsely supervised collaborative 3D object detection framework following the lightweight annotation strategy.
The proposed framework aims to mitigate the adverse effects of incomplete supervision by designing two effective modules: 1) the Pillar-MAE, a masked autoencoder that pre-trains large-scale point clouds in a self-supervised manner and generates representative 3D features for unlabeled LiDAR point clouds; 2) an instance mining module that mines positive instances and generates pseudo labels for collaborative detectors online. By enhancing the 3D perception ability of the collaborative detector and mining high-quality positive instances, these two modules effectively improve the performance of sparsely supervised collaborative detectors. To evaluate our approach, we first generate sparse labels for collaborative perception datasets in autonomous driving, including OPV2V <cit.>, V2X-Sim <cit.> and DAIR-V2X <cit.>. In contrast to fully supervised collaborative detectors, our SSC3OD model only necessitates annotating roughly 4% of objects. Subsequently, we perform a comprehensive evaluation of LiDAR-based collaborative 3D object detection. Both quantitative and qualitative results demonstrate that our proposed SSC3OD significantly improves the performance of sparsely supervised collaborative 3D object detection. To summarize, our contributions are as follows: * We propose a novel framework for sparsely supervised collaborative 3D object detection from the LiDAR point clouds. To the best of our knowledge, this is the first study to investigate collaborative 3D object detection in sparse labeling scenarios. * This work proposes two effective modules: Pillar-MAE, which reasons over high-level semantics in a self-supervised manner, and an instance mining module, which generates high-quality pseudo labels for collaborative detectors online. * We generate sparse-labeled training sets for three large-scale collaborative perception datasets and conduct extensive experiments to evaluate the effectiveness of our proposed model. The experimental results demonstrate that our approach can significantly enhance the performance of sparsely supervised collaborative detectors. § RELATED WORK §.§ Collaborative Perception Collaborative perception, an application of multi-agent systems <cit.>, has been widely applied in autonomous driving <cit.>. During the collaborative training, agents exchange information with each other to alleviate occlusion and sensor failure. Depending on the level of transmitted data, existing works on collaborative perception can be categorized into early collaboration (raw data) <cit.>, intermediate collaboration (features) <cit.>, and late collaboration (perception prediction) <cit.>. Besides, some datasets have been published to support research in this area, such as OPV2V <cit.>, V2XSim <cit.> and DAIR-V2X <cit.>. Several studies have explored various aspects of collaborative perception, including fusion strategies <cit.>, communication mechanisms <cit.>, localization errors <cit.>, and latency issues <cit.>. Nevertheless, there is limited research on the labeling cost of collaborative perception in autonomous driving. §.§ Mask Autoencoders for Point Clouds Masked autoencoder (MAE) <cit.> is a straightforward self-supervised technique that learns feature representations by randomly masking patches and then reconstructing the missing pixels. Researchers have attempted to apply this approach to point clouds due to its success in 2D computer vision. Min et al. <cit.> propose Voxel-MAE, a masked autoencoding framework for pre-training large-scale point clouds. 
This framework uses a range-aware random masking strategy to mask voxels and a binary voxel classification task to learn point cloud representations. Hess et al. <cit.> also introduce a voxel-based masked autoencoder. They reconstruct the masked voxels and distinguish whether they are empty. To address the issue of insufficient data labeling in sparsely supervised learning, we propose a masked modeling method for point clouds called Pillar-MAE. Our approach randomly masks pillars and uses only the 2D encoder to learn features, which is simpler than previous MAE methods <cit.> that rely on either Transformers or 3D encoders. §.§ Weakly/Sparsely Supervised 3D Object Detection LiDAR-based 3D detection relies on large-scale precisely-annotated data. Some recent works have proposed weakly supervised methods to reduce this heavy annotation requirement. Meng et al. <cit.> propose a two-stage weakly supervised 3D object detection framework WS3D. The first stage generates cylindrical proposals based on click-annotated bird's eye view scenes, and the second stage refines the proposals with a few well-labeled instances. Liu et al. <cit.> consider a more extreme annotation situation, sparsely supervised 3D object detection (SS3D), which only annotates one object in a scene. To address the challenge of missing annotation, they propose an instance mining module to mine positive instances and a background mining module to generate high-quality pseudo labels. Collaborative 3D detection requires annotating objects around multiple agents in a scene, which is more time-consuming and labor-intensive than single-agent detection. Therefore, we introduce a highly weakly supervised approach where each agent only annotates one instance in the collaborative scene, called sparsely supervised collaborative 3D object detection. § METHODOLOGY §.§ Preliminary Consider N agents in the autonomous driving scenario, where each agent is equipped with LiDAR and can perceive objects and communicate with each other. Let X_i and Y_i be the observation and the supervision of the ith agent, respectively. The intermediate collaborative 3D object detection works as follows: (a) F_i = f_encoder(X_i); (b) M_j → i = f_transform(ξ_i, (F_j, ξ_j)); (c) F_i^' = f_fusion(F_i, {M_j → i}_j=1,2,..,N); (d) O_i = f_head(F_i^'); (e) Y_i^' = f_union({Y_i}_i=1,2,..,N), where f_encoder, f_transform, f_fusion, f_head and f_union represent the feature encoder, pose transformation module, feature fusion module, detection head and supervision union module, respectively. F_i represents the features extracted from the point cloud observed by the ith agent, ξ_i=(x_i,y_i,z_i, θ_i, ϕ_i, ψ_i) is the pose of the ith agent, and M_j → i refers to the jth agent's feature, which is projected from the jth agent pose to the ith agent pose. After aggregating other agents' projected features, the ith agent's fusion feature is denoted as F_i^', and the detection output of the fusion feature is represented by O_i. The objective of collaborative 3D object detection is to minimize the detection loss L_det between the perception output O_i and the collaborative supervision Y_i^', which is generated by merging the supervision of N agents as shown in step (e). It is trivial to train a collaborative detection model when each agent has complete supervision Y_i. However, incomplete supervision can disrupt the model training in sparse-labeled scenarios due to the missing annotated instances.
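For concreteness, the composition of steps (a)-(e) during one forward pass can be sketched as below; this is only an illustrative outline, and f_encoder, f_transform, f_fusion and f_head are placeholder callables standing in for the modules above, not the authors' implementation.

def collaborative_forward(obs, poses, labels, f_encoder, f_transform, f_fusion, f_head, ego=0):
    # obs/poses/labels: per-agent observations, poses and (possibly sparse) supervision
    feats = [f_encoder(x) for x in obs]                        # (a) F_j for every agent j
    warped = [f_transform(poses[ego], (feats[j], poses[j]))    # (b) project F_j into the ego frame
              for j in range(len(obs))]
    fused = f_fusion(feats[ego], warped)                       # (c) aggregated feature F_ego'
    output = f_head(fused)                                     # (d) detection output O_ego
    y_union = [box for y in labels for box in y]               # (e) merged supervision Y_ego'
    return output, y_union                                     # training minimizes L_det(output, y_union)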
This work aims to reduce the impact of missing annotations by: 1) training the encoder in a self-supervised manner before step (<ref>) and 2) identifying missing instances after step (<ref>). §.§ The SSC3OD Framework The proposed SSC3OD is a general framework that learns robust 3D representations and mines missing positive instances for sparsely supervised collaborative detectors. As illustrated in Fig. <ref>, it comprises a collaborative detector, the Pillar-MAE module, and the instance mining module. The collaborative detector integrates features from multiple agents to expand the ego vehicle's field of view, and the instance bank is initialized with sparse labels, storing the collaborative detector's targets. Before training the collaborative detector, we use the Pillar-MAE (section <ref>) to pre-train the encoder in a self-supervised manner, which endows the encoder with a powerful 3D representation ability. Subsequently, we load the pre-trained encoder into the collaborative detector, which is trained with the instance bank and then acts as the teacher collaborative detector. Finally, based on the teacher collaborative detector's predictions, we mine missing instances online and merge them into the instance bank (section <ref>), which is used to retrain the collaborative detectors. This learning approach effectively improves the performance of the proposed sparsely supervised collaborative 3D detectors. §.§ Pillar-MAE Module We first introduce Pillar-MAE, a masked autoencoder that pre-trains large-scale point clouds in a self-supervised manner. This approach is based on the PointPillars encoder <cit.>. Given the observed point clouds, Pillar-MAE randomly masks the pillars and reconstructs their occupancy values with an autoencoder network, which helps the network generate representative 3D features. After that, we detailly introduce the main components of Pillar-MAE: random masking, autoencoder, and reconstructing targets. Random Masking: Given the observation X_i of the ith agent, the Pillar-MAE is used to generate point pillars first. For the point clouds with range W× H× D along the X× Y× Z axes, the pillar size is v_W × v_H× D, resulting in n_v occupied pillars that contain points. We then randomly mask non-empty pillars according to the mask ratio r_m and generate the occupancy label T of the reconstruction task. In the occupancy label, the value of occupied and empty pillars are 1 and 0, respectively. Autoencoder: The autoencoder consists of a 2D encoder and a 2D decoder. Following PointPillars, we transform the unmasked pillars into pseudo images and use a 2D convolutional encoder to extract features. Our decoder consists of a lightweight 2D deconvolution layer, which transforms the encoded features to the original size of the pillars. Reconstructing Target: Based on decoded features, we predict occupied pillars P and adopt a simple binary cross entropy loss as occupancy loss L_occ: L_occ=-1/b∑_i=1^b∑_j=1^n_lT_j^i logP_j^i where b is the batch size, P_j^i is the probability of jth pillar of the ith training sample, and T_j^i is the corresponding ground truth whether the pillar contains point cloud. With Pillar-MAE, the encoder is compelled to acquire high-level features for reconstructing the masked occupancy distribution of the 3D scene using only a small number of visible pillars. Subsequently, we utilize the pre-trained encoder to initialize and fine-tune the collaborative detectors with the sparse-labeled dataset. 
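As an illustration only (the array shapes and function names are ours, and a standard two-term binary cross-entropy is written out, whereas the expression above lists only the positive term), the random pillar masking and the occupancy loss could be sketched as:

import numpy as np

def random_pillar_mask(occupied_idx, mask_ratio=0.7, rng=None):
    # keep a (1 - mask_ratio) fraction of the non-empty pillars visible
    rng = rng or np.random.default_rng()
    occ = np.asarray(occupied_idx)
    n_keep = max(1, int(round((1.0 - mask_ratio) * len(occ))))
    keep = rng.choice(len(occ), size=n_keep, replace=False)
    return occ[keep]

def occupancy_bce(pred_prob, target, eps=1e-7):
    # binary cross-entropy between predicted occupancy probabilities and 0/1 labels
    p = np.clip(pred_prob, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)))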
§.§ Instance Mining Module Although the pre-trained encoder has strong representation ability, the detector still has difficulty in distinguishing some foreground instances from the background due to the lack of complete supervision. To further enhance the perceptual ability of the detector, we design an instance mining module to mine unlabeled instances. Specifically, we utilize the trained detector as a teacher collaborative detector to guide the instance mining module in identifying missing instances. The instance mining module employs both score-based and IoU-based filtering mechanisms. The score-based filtering selects high-confidence predictions, while the IoU-based filtering leverages Non-Maximum Suppression (NMS) to eliminate overlapping detections. To ensure the quality of the mined instances, we set a relatively high score threshold τ_cls and a relatively low IoU threshold τ_IOU. The high-quality pseudo labels can be generated by applying these two filtering mechanisms. To emulate real-world scenarios, we randomly select the ego vehicle in the training stage of collaborative detection, which results in varying input and detection targets in each epoch. As it is not feasible to save mined instances offline in this situation, built upon knowledge distillation, we employ a teacher collaborative detector to guide the learning of collaborative detectors. The process involves mining instances online and merging them into the instance bank. Algorithm <ref> outlines the pipeline of the instance mining module. § EXPERIMENTS We conduct experiments on three large-scale collaborative perception datasets consisting of both real-world and simulated scenarios involving two types of agents. We choose three types of intermediate collaboration methods, including traditional <cit.>, graph-based <cit.> and attention-based <cit.> fusion methods. The input considered is solely LiDAR, and we assume an ideal collaborative perception scenario without latency or pose error. The LiDAR-based collaborative 3D object detection performance is measured with Average Precisions (AP) at Intersection-over-Union (IoU) thresholds of 0.3, 0.5 and 0.7. §.§ Datasets DAIR-V2X <cit.> is the first real-world vehicle-to-infrastructure (V2I) collaborative perception dataset, which contains a vehicle and a roadside unit in each cooperative frame. We adopt the complete cooperative annotation from CoAlign <cit.> and set the LiDAR range as x∈[-100m,100m],y∈[-40m,40m]. OPV2V <cit.> is a simulated vehicle-to-vehicle (V2V) collaborative perception dataset, which is collected with the co-simulating framework OpenCDA <cit.> and CARLA simulator <cit.>. It contains one to five vehicles in each cooperative frame, and we set the LiDAR detection range as x ∈ [-140m,140m],y ∈ [-40m,40m]. V2X-Sim <cit.> is a simulated vehicle-to-everything (V2X) collaborative perception dataset. It is generated with traffic simulation SUMO <cit.> and CARLA simulator <cit.>, including 100 scenes with 10,000 frames divided into 8,000/1,000/1,000 for training/validation/testing. We use V2XSim 2.0 and set the LiDAR range as x∈[-32m, 32m], y∈[-32m,32m] for collaborative 3D detection. We expand the sparse labeling of the original collaborative perception dataset by randomly retaining a single annotated object of each agent in every 3D scene from the training set. As shown in Tab. 
<ref>, we count the number of frames in training sets for three datasets, along with the number of both full and sparse labels, and compute the proportion of sparse labels to all objects. The sparse subsets require annotation of only around 4% of objects, in contrast to the complete annotation of all objects in the original training set. This demonstrates the cost-saving effect of the proposed sparse annotation strategy. §.§ Implementation Details We adopt PointPillars <cit.> with the grid size of [0.4m,0.4m] as our backbone. During training, we randomly select an autonomous vehicle (AV) as the ego vehicle, whereas a fixed ego vehicle is employed for detector evaluation during testing. We adopt single-scale intermediate fusion and training with Adam optimization. The training epoch of Pillar-MAE is 25, and the training epoch of collaborative detectors are 20, 30, and 20 on DAIR-V2X, OPV2V, and V2X-Sim, respectively. We set the mask ratio r_m in Pillar-MAE as 0.7. For instance mining module, we set the score threshold τ_cls and IOU threshold τ_IOU as 0.3 and 0.15, respectively. All models are trained on RTX A4000. We choose three novel intermediate collaborative detection methods: F-Cooper <cit.>, AttFusion <cit.> and DiscoNet <cit.>. F-Cooper employs a straightforward maxout method to fuse features, whereas AttFusion introduces a single-head self-attention fusion module. DiscoNet combines matrix-edge valued weight with early collaboration based knowledge distillation to capture feature relationships. To ensure a fair and impartial comparison, we only employ the graph fusion module from DiscoNet, omitting the early collaborative knowledge distillation component. §.§ Quantitative Results To validate the performance of the proposed framework, we train the collaborative detectors with three distinct strategies, 1) training from scratch on the full-labeled training set, 2) training from scratch on the sparse-labeled training set, and 3) training with the SSC3OD framework on the sparse-labeled training set. Tab. <ref> shows the performance of detectors trained with three strategies. Compared with the fully supervised collaborative detector, the performance of the sparsely supervised collaborative detector drops more than 10% when trained from scratch. This decline can be attributed to most positive instances remaining unlabeled in sparsely supervised scenarios, causing the model to identify them as background. However, training the sparsely supervised collaborative detector with our SSC3OD framework (w/ SSC3OD) yields significantly improved performance, indicating the framework's effectiveness in mitigating incomplete annotation impact. §.§ Ablation Studies This section presents a series of ablation studies to analyze modules' contributions in SSC3OD. Tab. <ref> displays the performance of collaborative detectors using various settings on sparse-labeled V2XSim. We observe that: i) directly utilizing the instance mining module (IM) on the collaborative detector does not significantly improve performance due to the unreliable results of sparsely supervised collaborative detectors trained from scratch. ii) The performance of the collaborative detector loaded with the pre-trained encoder has been significantly improved, thereby demonstrating the effectiveness of the Pillar-MAE module (PM) in enhancing the point cloud perception ability of the encoder. 
iii) Using IM after a PM-based collaborative detector leads to further improvements in the performance of collaborative detectors, revealing that the PM-based collaborative detector provides reliable detection results and IM mines high-quality pseudo labels. Then, we analyze the impact of the mask ratio r_m in Pillar-MAE. We train the Pillar-MAE with different r_m values (0.5, 0.7 and 0.9) and then utilize corresponding pre-trained encoders to train collaborative detectors. Tab. <ref> illustrates the effect of the mask ratio in Pillar-MAE. We observe that the PM-based collaborative detector achieves optimal performance when r_m=0.7. This is because with a small r_m the model retains more point cloud information, so the occupancy of the point cloud can be reconstructed without requiring much perception ability. Conversely, a large r_m leads to excessive loss of point cloud information, compromising effective point cloud perception. In addition, we analyze the impact of the score threshold τ_cls in the instance mining module. We regard the collaborative detector trained with the pre-trained encoder as the teacher collaborative detector and set different values of τ_cls to generate pseudo labels. As the classification score of the sparsely supervised collaborative detector falls between 0.2 and 0.35, we set τ_cls to 0.2, 0.25 and 0.3. Tab. <ref> demonstrates a consistent enhancement in detection performance when τ_cls is set to 0.3. When τ_cls is set to 0.2 or 0.25, the pseudo labels may contain more false positive instances. Therefore, we choose τ_cls=0.3 to ensure high-quality pseudo labels. §.§ Qualitative Results This section presents a qualitative evaluation of the proposed modules. We show the detection results of the fully supervised collaborative detectors (FS), sparsely supervised collaborative detectors trained from scratch (SS), sparsely supervised collaborative detectors trained with Pillar-MAE (SS + PM), and sparsely supervised collaborative detectors trained with Pillar-MAE and instance mining module (SS + PM + IM). Fig. <ref> illustrates the collaborative detection results in different settings. Detectors trained in the FS setting achieve high-confidence detection results. Conversely, detectors trained in the SS setting exhibit relatively low-confidence detection results due to missing positive instance labels. Collaborative detectors trained in the SS + PM setting recognize more foreground and reduce false positives, indicating the encoder's powerful 3D perception ability. Moreover, adding the instance mining module to detectors (SS + PM + IM) enhances object localization accuracy and substantially increases confidence scores. This demonstrates the effectiveness of the instance mining module in producing high-quality pseudo labels. § CONCLUSION This work introduces SSC3OD, a sparsely supervised collaborative 3D object detection framework. The proposed framework comprises two key modules: the Pillar-MAE and the instance mining module. The Pillar-MAE enhances the 3D perception ability of encoders in a self-supervised manner, while the instance mining module generates high-quality pseudo labels for collaborative detectors online. Sparse labels for the collaborative perception datasets are generated to evaluate the proposed framework. Extensive experiments demonstrate that SSC3OD significantly improves the performance of sparsely supervised collaborative 3D object detection.
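As a rough, hedged illustration of the score- and IoU-based filtering performed by the instance mining module described in the Methodology (using axis-aligned bird's-eye-view boxes for simplicity, whereas the actual detector handles oriented 3D boxes), the pseudo-label selection could look like:

import numpy as np

def iou_bev(a, b):
    # IoU of two axis-aligned BEV boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def mine_pseudo_labels(boxes, scores, tau_cls=0.3, tau_iou=0.15):
    # keep high-confidence, mutually non-overlapping teacher detections
    order = np.argsort(-np.asarray(scores))
    kept = []
    for i in order:
        if scores[i] < tau_cls:
            break
        if all(iou_bev(boxes[i], boxes[j]) <= tau_iou for j in kept):
            kept.append(i)
    return kept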
http://arxiv.org/abs/2307.02116v1
20230705084339
Tunnel-coupled optical microtraps for ultracold atoms
[ "Shangguo Zhu", "Yun Long", "Wei Gou", "Mingbo Pu", "Xiangang Luo" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "physics.atom-ph", "quant-ph" ]
http://arxiv.org/abs/2307.01167v1
20230703172208
Lattice Thermal Conductivity of 2D Nanomaterials: A Simple Semi-Empirical Approach
[ "R. M. Tromer", "I. M. Felix", "L. F. C. Pereira", "M. G. E. da Luz", "L. A. Ribeiro Junior", "D. S. Galvão" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall", "00-XX", "J.2; I.6" ]
Extracting reliable information on certain physical properties of materials, such as thermal transport, can be very computationally demanding. Aiming to overcome such difficulties in the particular case of lattice thermal conductivity (LTC) of 2D nanomaterials, we propose a simple, fast, and accurate semi-empirical approach for its calculation. The approach is based on parameterized thermochemical equations and Arrhenius-like fitting procedures, thus avoiding molecular dynamics or ab initio protocols, which frequently demand computationally expensive simulations. As proof of concept, we obtain the LTC of some prototypical physical systems, such as graphene (and other 2D carbon allotropes), hexagonal boron nitride (hBN), silicene, germanene, binary and ternary BNC lattices, and two examples of the fullerene network family. Our values are in good agreement with other theoretical and experimental estimations, nonetheless being derived in a rather straightforward way, at a fraction of the computational cost. Lattice Thermal Conductivity of 2D Nanomaterials: A Simple Semi-Empirical Approach D. S. Galvão August 1, 2023 ======================================================================================= § INTRODUCTION Two-dimensional (2D) layered crystals are structures typically with strong in-plane chemical bonds and weak out-of-plane van der Waals interactions <cit.>. The interest in these materials has increased since the development of simple techniques to produce high-quality graphene films <cit.>. Indeed, the large applicability of graphene in distinct optoelectronic devices has continuously increased the interest in novel 2D nanomaterials, including the so called groups III <cit.>, IV <cit.>, V <cit.>, VI <cit.>, and VII <cit.>, and their analogues. Further, 2D binary layers, such as hexagonal boron nitride (hBN) <cit.> and other group III nitrides <cit.>, transition metal dichalcogenides (for instance, MoS_2 and WSe_2) <cit.>, and their hybrid in-plane heterostructures (like graphene-hBN and MoS_2-WS_2) <cit.>, have been recently synthesized. As for metals and alloys, when compared to 2D structures, the formation of 3D ones is often energetically favored due to the non-directional metallic bonding. However, recent synthetic developments have overcome this limitation and have made possible the synthesis of different metallic nanosheets with well-defined 2D shapes <cit.>. The unique physical-chemical properties of 2D systems, especially in the nanoscale domain, make them good candidates to advance the current scenario of flat optoelectronics <cit.>. Among these features, lattice thermal conductivity (LTC) stands out as a critical parameter establishing the energy conversion efficiency associated with thermoelectric effects <cit.>.
Regarding the LTC experimental determination <cit.>, the experiments typically consider suspended micro-bridge <cit.>, 3 ω <cit.>, time-domain thermoreflectance <cit.> and Raman spectroscopy <cit.> techniques. From the theory point of view, the most common approaches for the LTC rely on the Boltzmann transport equation <cit.>, via ab initio calculations <cit.>, Green's functions <cit.>, and molecular dynamics (MD) simulations <cit.>. Despite the success of these methods, they are computationally expensive, which poses limitations to extensive LTC analyses of 2D nanomaterials and their potential applications. Therefore, faster and simpler ways to estimate LTC for 2D nanomaterials are of great importance. With this goal, we propose here a straightforward protocol to obtain the LTC for 2D nanomaterials using semi-empirical approaches, combining thermochemical equations with direct Arrhenius-like fittings. We illustrate the efficiency of this novel approach considering representative 2D systems, such as graphene (and other 2D carbon allotropes), hBN, silicene, germanene, binary and ternary BNC latices, and 2D-qHC_60 and 2D-C_36 (from the fullerene network family). Our results are in good agreement with theoretical and experimental values in the literature, at a fraction of the computational cost. § THE METHOD We start highlighting that thermochemical equations, parameterized to molecules and solids and implemented in the Molecular Orbital PACkage (MOPAC16) <cit.>, are the core of our semi-empirical approach to estimating LTC in 2D nanomaterials. MOPAC16 is a quantum chemistry program based on Dewar and Thiel's NDDO approximation. MOPAC codes are well-known for producing reliable results for small molecules and biomolecules. Recently, it has also been used to describe some aspects of 2D crystals <cit.>. For instance, the vibrational modes in 2D crystals are often (but not always) confined in a plane. Therefore, the degrees of freedom of large molecular systems are essentially those in a 2D crystal. This motivates us to use MOPAC16 to address the lattice thermal conductivity of 2D materials. However, it should be taken into account that flexural vibrational modes can dominate the LTC of 2D systems <cit.>. For our purposes, the relevant thermochemical quantities are the vibrational part of the heat capacity at constant pressure and the normal mode frequencies. Thus, from MOPAC16 output (see details in Sec. <ref>) we should extract two types of quantities. (a) The positive and non-degenerated modes ω_n's (n=1, …, N). Most of the quantum chemistry codes indicate the ω_n's usually in cm^-1, if in Hz ν_n = c ω_n, with c = 29.98 × 10^9 cm/s. (b) The vibrational component of the heat capacity at constant pressure C_p.VIB(T) in cal/(mol K). We provide all the details on how to apply the method and obtain the thermal conductivity of graphene in the YouTube link: https://youtu.be/qwuxWuP-uVs. The heuristic (and elementary) reasoning for our LTC semi-empirical formula is as follows (for a more elaborated first principles treatment see, e.g., <cit.> and the references therein). We start recalling the Fourier law in 3D, or J_i = κ_i j (∇ T)_j, with J_i the heat current ([J] = W m^-2) in the i direction, (∇ T)_j the temperature gradient ([∇ T] = K m^-1) in the j direction and κ_i j the i j element of the heat conductivity tensor ([κ] = W m^-1 K^-1). The 1D version of the above equation is trivial, but a 2D form is usually not directly derived. 
Therefore, we need to calculate an effective κ in terms of proper averages and a limit process (refer to the analysis in <cit.>). This is the scheme we consider next. We write J = (J_x+J_y)/2, for J_x ≈ κ_x x δ T/δ_x + κ_x y δ T/δ_y + κ_x z δ T/δ_z and J_y ≈ κ_y x δ T/δ_x + κ_y y δ T/δ_y + κ_y z δ T/δ_z. Notice that we assume the same temperature variation δ T along each short characteristic distance δ_i along directions i=x,y,z. Now, we phenomenologically relate the heat current J to the delivered power W across the effective area L δ_z, representing a kind of average of the areas δ_x δ_z (normal to J_y) and δ_y δ_z (normal to J_x). Thus J = W/(L δ_z) and consequently W/(L δ_z) ≈ [(κ_x x + κ_y x)/(2 δ_x) + (κ_x y + κ_y y)/(2 δ_y) + (κ_x z + κ_y z)/(2 δ_z)] δ T, so that W/(L δ T) ≈ (κ_x x + κ_y x)/2 (δ_z/δ_x) + (κ_x y + κ_y y)/2 (δ_z/δ_y) + (κ_x z + κ_y z)/2. The tensor elements κ_i j with i,j ≠ z — being quantities with units proportional to area^-1 and describing a process normal to the direction z — should scale inversely with the distance δ_z. Hence, for δ_z → 0 we suppose the product κ_i j δ_z to be well behaved and finite. Moreover, in such limit, we also expect κ_i z and κ_z j to vanish. In this way, we introduce the ad hoc expression κ_L = lim_δ_z → 0 [(κ_x x + κ_y x)/2 (δ_z/δ_x) + (κ_x y + κ_y y)/2 (δ_z/δ_y)], thus we finally have κ_L = W/(L δ T). For our 2D materials, it is natural to take δ_x = l_x and δ_y = l_y the lattice lengths in the x- and y-directions and then simply set L = (l_x+l_y)/2. Further, for the collection of vibrational phonon mode frequencies ω_n directly from MOPAC16 we define ω̅ = 1/N ∑_n=1^N ω_n. We likewise denote the average energy of these modes as E_VIB. This readily provides an estimation for the power term in Eq. (<ref>), as W ≈ ν̅ E_VIB, where ν̅ = c ω̅. Combining all these results together, we obtain (at room temperature) κ_L(300) = ν̅ E_VIB/(L δ T). In principle, the temperature variation parameter δ T (in K) must be distinct in each specific situation. We discuss its estimation in Sec. <ref>. The energy E_VIB (in J) can be computed through an Arrhenius-like equation relating it to the vibrational part of the heat capacity at constant pressure <cit.>. In fact, for C_p,VIB calculated from MOPAC16, we have (for k_B the Boltzmann constant in J K^-1) C_p,VIB(T) = 𝒦 exp[-E_VIB/(2 k_B T)]. Above, 𝒦 is only a free parameter, interpreted as C_P in the limit of very high T, but not really relevant for our purposes. From Eq. (<ref>), it follows that ln[C_p,VIB(T)] = ln[𝒦] - (E_VIB/2 k_B) (1/T). Therefore, ln[C_p,VIB] versus T^-1 is a straight line with a negative slope α = -E_VIB/(2 k_B), and the desired energy term follows. We remark that for 2D materials, an Arrhenius-like relation tends to give good fittings for the general dependency of thermal quantities (like conductivity and heat capacity) on the energy of vibrational modes and temperature <cit.>. This is exactly the case for the 2D nanomaterials discussed in the present work. §.§ The estimation of δ T In order to estimate the δ T temperature parameter in Eq. (<ref>), we have considered extensive tests and calibrations for a large number of groups of 2D materials (see below). From such a procedure, we have found a rule of thumb (in the spirit of a semi-empirical approach) for their numerical values in Kelvin: (1) δ T = 15 for materials with large pores, like Ene-yne Graphyne <cit.>, or with buckling, such as germanene <cit.>, silicene <cit.>, pentagraphene <cit.>, and MoS_2 <cit.>.
(2) δ T = 3 (1+|Z_A-Z_B|) (provided |Z_A-Z_B| ≤ 2) for two chemical species, where Z_C is the atomic number of species C = A, B. Examples are hBN <cit.>, NHG <cit.>, carbon nitride <cit.>, and BC systems <cit.>. (3) δ T=3 for other types of 2D nanomaterials, such as graphene <cit.>, phagraphene <cit.>, and diboron porphyrin <cit.>. (4) For graphene-like structures satisfying conditions (2) or (3) above, but for which also the number of bond types N_DB > 1 (and having six atoms in the unit cell), the previous δ T values must be divided by the factor (2 + N_DB). (5) δ T = 75 for 2D fullerene-like networks. The above scheme leads to reasonable values for the lattice thermal conductivity of several systems, as we show next. Nonetheless, an alternative approach, based on machine learning ideas, has also been examined, and it is presented in the Appendix <ref>. Finally, a third possibility is briefly mentioned in the Conclusion. §.§ Some computational technical details In order to estimate the vibrational part of the heat capacity at constant pressure and the normal modes, MOPAC16 requires three keywords: thermo = (200,600), let and geo-ok. The first determines the temperature range, from 200 K to 600 K, and the second is a safety check, imposing that the calculations should be performed even for non-stationary conditions. The third relates to the system size, avoiding any halt for small lattice parameters. Indeed, for small unit cells, such as graphene with two atoms and basis vectors smaller than 4.0 Å, it is necessary to add the keyword geo-ok to increase the quality of the results. For our LTC calculation scheme, there is no need to run a geometry optimization in MOPAC16. One can use 2D structures derived from other MD- or DFT-based software and/or experimental data as input. This does not alter the accuracy of our method, as it will become clear from the examples next. Moreover, we consider only the positive phonon frequencies and their degeneracy does not need to be taken into account. The total number of phonon frequencies generated depends on the parameters assumed in the computations. The Parametric Method number 7 (PM7) was the first semi-empirical protocol successfully tested to model crystal structures and to obtain the heat of formation of solids <cit.>. Within the PM7 parameterization, MOPAC16 can produce imaginary frequencies for 2D crystals. Other procedures, such as AM1 <cit.>, tend to yield fewer imaginary frequencies than PM7. Nonetheless, very few positive modes (sometimes even a single one) suffice for a reasonable estimation of the LTC. § RESULTS In the following, we demonstrate the efficiency of our semi-empirical method by discussing distinct materials of interest. To emphasize the influence of the 2D topologies in establishing the LTC values, we present our calculations in an increasing order of complexity regarding system morphology, thus addressing successively: single-species and flat layers (e.g., graphene), binary and flat layers (e.g., hBN), buckled lattices (e.g., silicene and germanene), porous lattices (e.g., ene-yne graphyne), large unit cells with different carbon rings (e.g., phagraphene), binary and ternary flat nanomaterials with different stoichiometries (e.g., BC and BCN), supercells of different sizes (e.g., BC_3), and the fullerene networks 2D-qHC_60 and 2D-C_36. Naturally, the analyzed systems have different parameters, which demand distinct parametric methods, and lead to different thermochemical results. 
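To make the recipe of the Method section concrete, the following short sketch (the function and variable names are ours, and parsing of the MOPAC16 output into temperature/heat-capacity arrays is assumed to have been done separately) fits ln C_p,VIB versus 1/T to obtain E_VIB and then evaluates κ_L(300) = ν̅ E_VIB/(L δT):

import numpy as np

K_B = 1.380649e-23   # Boltzmann constant in J/K
C_CM = 29.98e9       # speed of light in cm/s, as used in the text

def e_vib_from_cp(temps_K, cp_vib):
    # Arrhenius-like fit: the slope of ln(C_p,VIB) versus 1/T equals -E_VIB/(2 k_B)
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(cp_vib), 1)
    return -2.0 * K_B * slope                                    # E_VIB in J

def kappa_L(omega_bar_cm, e_vib_J, L_angstrom, delta_T_K):
    # kappa_L(300) = nu_bar * E_VIB / (L * delta_T), with nu_bar = c * omega_bar
    nu_bar = C_CM * omega_bar_cm                                 # Hz
    return nu_bar * e_vib_J / (L_angstrom * 1e-10 * delta_T_K)   # W/(m K)

# example with the rounded graphene values quoted in the Results below
# (within roughly 2% of the reported 3084.6 W/mK, the residual coming from rounding):
# kappa_L(1709.5, 4.51e-20, 2.46, 3.0)  ->  ~3.1e3 W/(m K)

The δT entering kappa_L is the rule-of-thumb value listed above; for instance, rule 2 combined with rule 4 gives δT = 3(1+|5-6|)/(2+2) = 1.5 K for BC_3, as used later in the Results.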
To indicate the processes in a clearer way, some of them are explicitly mentioned in the respective sections. Furthermore, table <ref> presents a list of relevant information regarding all examples considered in this work. §.§ Graphene - Single-Species with a Flat Layer Graphene is an all-carbon flat hexagonal lattice structure. Its unit cell (inset panel of Figure <ref>(a)) contains two atoms. Moreover, l_x=l_y=2.46 Å so that L=2.46 Å. Figure <ref>(a) shows the heat capacity at constant pressure as a function of temperature, calculated with MOPAC16 at the PM7 level. The associated Arrhenius-like plot, as described in the previous Section, is presented in Figure <ref>(b). It is worth mentioning that MOPAC16 takes only a few seconds to perform the thermochemical calculations in a personal laptop with a single processor and does not require much memory. Also, we do not need to optimize the graphene unit cell obtained from the Computational 2D Materials Database (C2DB) <cit.>. Finally, the same simulation run yields the ω_n's as well as the vibrational part of C_p, VIB. From the Arrhenius fitting we obtain E_VIB=4.51× 10^-20 J. In this case, we have 3 N_atom-3 = 3 modes, with ω_1 = ω_2 = 1549.0 cm^-1 and ω_3=1870.7 cm^-1. Hence ω̅=1709.5 cm^-1 by disregarding one of the degenerate ω_1 = ω_2. From our list of δ T's, the numerical value to be inserted into Equation (<ref>) is 3 K. Therefore, the estimation of graphene's LTC at room temperature is κ_L(300)=3084.6 W/mK, which is in very good agreement with other experimental <cit.> and theoretical <cit.> results. Remarkably, our approach demands only a few seconds to obtain this value. We should remark that just as a test, we have performed the calculation including the degenerate frequencies and the results remain the same. §.§ Hexagonal Boron Nitride - Flat Layer with Binary Species Plots similar to the previous ones, but for hBN, are shown in Figure <ref>. Contrasting with graphene, now we have one negative (actually, imaginary) frequency, ω_1=-1105.7 cm^-1 and ω_2=ω_3=963.3 cm^-1. By discarding the negative frequency ω_1 and one degenerate frequency, we obtain ω̅=963.3 cm^-1. The δ T parameter comes from rule 2 in Section <ref>, or δ T= 3 (1+|7-5|)= 9 K, where Z_N=7 and Z_B=5. For L see Table <ref>. Thus, for hBN at room temperature, κ (300)=289.2 W/mK, matching independent experimental <cit.> and theoretical <cit.> estimations. It took only 1.8 seconds of calculation in MOPAC16 with a single run. Furthermore, although the semi-empirical method is not parameterized with significant accuracy for boron <cit.> – even producing inconsistencies in the hBN geometry – the LTC value calculated here is close to those from other methods, such as DFT-based Boltzmann transport equation <cit.>. §.§ Silicene and Germanene - Buckled Lattices Silicon-based systems are also structures for which optimization processes, at the semi-empirical level, can lead to inconsistencies in 2D geometries. In fact, only negative frequencies are obtained by employing parametric methods such as PM7, PM6, and PM3. However, an older parametric method, AM1, produces positive phonon frequencies. Therefore, for silicene, AM1 has been our choice in MOPAC16 calculations. For silicene, Figures <ref>(a) and <ref>(b) show the heat capacity at constant pressure as a function of temperature and the related Arrhenius-like fitting, respectively. Since silicene has a buckling atomic arrangement, we set δ t=15 K according to rule 1 discussed in the previous Section. 
Thus, the calculated LTC value at room temperature is κ (300)=10.1 W/mK (taking less them 1.0 seconds for the calculation). This value is very close to 9.4 W/mK, obtained from theoretical works in the literature <cit.>. We also calculated the LTC for germanene (the heat capacity versus T and the related Arrhenius-like curve are not shown). In this case, only the MNDO parametrization produces positive phonon modes. For some other parameters, see Table <ref>. From simulations taking less than 1.0 seconds to run, we obtained κ(300)=10.0 W/mK. This value coincides with the ab initio computations reported in the literature <cit.>, which nevertheless are very time-consuming since they need to numerically integrate the Boltzmann transport equation. §.§ Ene-yne Graphyne - Large Porous Structures Recently, several novel 2D carbon allotropes have been either synthesized or theoretically predicted <cit.>. Among the latter, the Ene-yne Graphyne stands out due to its structure with large pores <cit.>. Figures <ref>(a) and <ref>(b) display the heat capacity at constant pressure as a function of temperature, calculated at the PM7 level and its related Arrhenius-like fitting. Since Ene-yne Graphyne presents large pores, we use δ T=15 K according to rule 1 presented in the Section <ref>. For other parameters, see Table <ref>. The calculated LTC is κ (300)=14.9 W/mK (15.5 W/mK if we uss AM1). The calculation takes approximately 21.0 seconds in MOPAC16 with a single run. The ab initio results in the literature vary in a relatively broad range, from 3.0 W/mK to 10.0 W/mK <cit.>. Although there are clear discrepancies for the LTC values in the literature, the semi-empirical estimation also indicates a small LTC value for the Ene-yne Graphyne. §.§ Phagraphene - Large Unit Cell and Different Carbon Rings We also applied our protocol to a quasi-planar carbon allotrope named Phagraphene <cit.>. This theoretically proposed material is composed of sp^2-like hybridized carbon atoms with a 5-6-7 sequence of fused rings. Its binding energy (-9.03 eV/atom) is rather close to that of graphene (-9.23 eV/atom) <cit.>. Figures <ref>(a) and <ref>(b) show the heat capacity at constant pressure as a function of temperature, calculated at the PM7 level and its related Arrhenius-like trend. The Phagraphene unit cell considered here (see the inset panel in Figure <ref>(a)) is an orthorhombic lattice with 20 atoms. Here, due to its fair similarity to graphene, we heuristically assume δ T=3 K. The calculated LTC at room temperature is κ (300)=196.7 W/mK, taking approximately 34.0 seconds in MOPAC16. Our κ is 21.6% smaller than that reported in the literature (251.5 W/mK) <cit.>. Nonetheless, we remark this is a reasonable value given the very crude estimation for δ T based solely on graphene. §.§ BC and BCN Hexagonal 2D Lattices - Binary and Ternary Flat Nanomaterials with Different Stoichiometries Interesting classes of 2D materials are BC and BCN hexagonal lattices formed by carbon, boron, and nitrogen <cit.>. These structures have hexagonal unit cells with eight atoms, as illustrated in Figure <ref> for three particular species (from left to right: BC_3 containing only boron and carbon atoms, BC_6N-1, and BC_6N-2, the latter two also containing nitrogen atoms). The LTC of these materials was investigated in<cit.>. BC_3, BC_6N-1, and BC_6N-2 have, correspondingly, N_DB = 2, N_DB = 3 and N_DB = 4. Therefore, for BC_3 we consider rule 2 combined with rule 4, yielding δ T=1.5 K. 
For the other two cases, we used rules 3 and 4, thus that δ T = 0.6 for BC_6N-1 and δ T = 0.5 for BC_6N-2. In this way, at room temperature, we obtain LTC values of 467.7 W/mK, 1213.3 W/mK, and 1588.6 W/mK for BC_3, BC_6N-1, and BC_6N-2, respectively. They agree with those in the literature, namely, 410 W/mK, 1080 W/mK, and 1570 W/mK <cit.>. All the calculations took approximately 5.0 seconds in MOPAC16 with a single run. §.§ 2D qHC_60 - The Fullerene Network Family As a final example application, we considered the 2D quasi hexagonal C_60, qHC_60, structure – and the associated 2D-C_36, see below. Both structures belong to promising (for applications) families of 2D networks resulting from fullerene (C_60) and fullerene-like molecules. In fact, qHC_60 is the first synthesized example of such materials, produced from C_60 and magnesium <cit.>. A supercell containing 120 atoms was used to investigate the electric and optical properties of the 2D qHC_60 <cit.>. For the analysis here, to minimize the computational cost, we use a supercell composed of 60 atoms. Figure <ref> illustrates the specific heat versus temperature and the Arrhenius-like plot. We applied our method to 2D-qHC_60 with the parameters shown in Table <ref> and δ T=75 K from rule 5. We obtained a value of 6.1 W/mK for the LTC, which is reasonably close to the reported value of 4.3 W/mK <cit.>. Notably, our calculations for 2D-qHC_60 were completed in less than 2 minutes. In addition, we used the same δ T and the parameter values listed in Table <ref> to calculate the LTC for the 2D-C_36, a network theoretically predicted in <cit.>. Our approach yielded κ(300) = 7.7 W/mK, in fair agreement with the literature value of 9.8 W/mK<cit.>. § FINAL REMARKS AND CONCLUSION In this contribution, we have proposed a straightforward and computationally inexpensive semi-empirical theoretical approach to obtain the LTC of 2D nanomaterials. The framework avoids time-consuming molecular dynamics and/or ab initio calculations. For a particular 2D system, our method first extracts its average vibrational energy E_VIB from an Arrhenius-like fitting, relating E_VIB to the vibrational part of specific heat at constant pressure C_p,VIB. Then, from E_VIB and the material corresponding vibrational mode frequencies ω_n we use Eq. (<ref>) to obtain κ. The thermochemical quantities C_p,VIB and ω_n are obtained from the MOPAC16 software. The necessary temperature parameter δ T in Eq. (<ref>) is taken from a list of standard values described in Sec. <ref>, estimated for each group of 2D materials sharing specific common characteristics. For validation, we have studied some representative 2D materials, such as graphene (and other 2D carbon allotropes), hexagonal boron nitride (hBN), silicene, germanene, binary, and ternary BNC lattices and fullerene networks. Regarding the obtained results, some final remarks are in order. As we can see from Table <ref>, overall, our protocol leads to reasonable estimations of the LTC for most of the considered materials, with the great advantage of employing simple and fast calculations when compared to more standard procedures. As already discussed, in the present approach the only parameter which somehow must be phenomenologically estimated through distinct means is δ T. In fact, the set of values in Section <ref> represents averages for collections of 2D systems. Of course, assuming a “typical" δ T may give rise to discrepancies. 
Note that in the case of Graphenylene – having a unit cell of 12 atoms – our prediction of 206 W/mK is just one-third of the reference 600 W/mK. While for T-Graphene – 4 atoms per unit cell – our 434 W/mK is around half the reference 800 W/mK Both use rule 3, δ T = 3 K, which incidentally for graphene leads to a very good value. On the other hand, the same δ T = 75 K for both 2D-qHC_60 and 2D-C_36 yield fair results, also with good levels of precision, namely, a difference between our calculations and the literature of 29.5% for the former and 21.4% for the latter (see Table <ref>). Therefore, although our approach already constitutes a valuable tool to investigate the LTC of 2D nanomaterials, additional improvements associated with determining δ T is possible. Related to the protocol in Sec. <ref> (for an alternative scheme, see the discussion in the Appendix <ref>), we can mention two. - Refining the set of rules in Sec. <ref> by further sub-dividing the present groups of 2D systems. Consequently, we would have a larger number of sub-cases and thus of δ T values. - To explicitly calculate δ T, also following semi-empirical approaches. Along this line, one strategy — presently under investigation — is to set δ T = E_normal/k_B, for E_normal the lowest normal vibrational mode energy of an effective molecule represented by the lattice unit cell. The vibrational length can be estimated from the thermal expansion of the 2D material <cit.>. Hopefully, the obtained results will be reported in the near future. § ACKNOWLEDGEMENTS We would like to thank M. H. F. Bettega for fruitful discussions about vibrational modes of small molecules. This work was financed by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Finance Code 001, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), FAP-DF, and FAPESP. We thank the Center for Computing in Engineering and Sciences at Unicamp for financial support through the FAPESP/CEPID Grants #2013/08293-7 and #2018/11352-7. L.A.R.J acknowledges the financial support from FAP-DF grants 00193-00000857/2021-14, 00193-00000853/2021-28, and 00193-00000811/2021-97, and CNPq grants 302922/2021-0 and 350176/2022-1. L.A.R.J. gratefully acknowledges the support from ABIN grant 08/2019 and Fundação de Apoio à Pesquisa (FUNAPE), Edital 02/2022 - Formulário de Inscrição N.4. L.A.R.J. acknowledges Núcleo de Computação de Alto Desempenho (NACAD) and for providing computational facilities. This work used resources of the Centro Nacional de Processamento de Alto Desempenho em São Paulo (CENAPAD-SP). M. G. E. da Luz acknowledges research grants from CNPq (304532/2019-3) and from project “Efficiency in uptake, production and distribution of photovoltaic energy distribution as well as other sources of renewable energy sources” (Grant No. 88881.311780/2018-00) via CAPES PRINT-UFPR. The authors acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the SDumont supercomputer, which have contributed to the research results reported within this paper. URL: http://sdumont.lncc.br. § AN ALTERNATIVE WAY TO OBTAIN Δ T AND SOME PRELIMINARY RESULTS A potentially reliable way to estimate δ T is by means of machine learning (ML) protocols. The strategy is to use linear regression in association with statistical analyses in order to derive proper values of δ T for groups of 2D materials. 
Indeed, based on known data, we selected a list of numerical values for quantities related to properties already characterized elsewhere, including some Boolean variables — yes: 1 / no: 0 — for the presence or not of a given feature. This, of course, includes previously calculated κ_L's. The specific quantities considered (for a collection of twenty different systems) are: κ_L, average frequency, vibration energy, lattice length, buckling status, porousness, fullerene presence, number of species, different bond numbers, and number of atoms in the unit cell. For the concrete search for δ T (which we call δ T^ML), we used a tool implemented in Python, relying on scikit-learn routines <cit.>. In turn, scikit-learn is based on ordinary least squares (OLS) linear regression. Briefly, for y the target variable (our δ T^ML) and x_1, x_2, …, x_p the predictor variables (the known parameters from the database), the OLS finds the best set of coefficients b_0, b_1, b_2, …, b_p, allowing us to estimate y from y = b_0 + b_1 x_1 + b_2 x_2 + … + b_p x_p. Once we have determined { b } = {b_0, b_1, b_2, …, b_p}, we can easily obtain δ T^ML for a new material from Eq. (<ref>) and the corresponding predictor variables. From the above scheme, we analyzed the materials presented in Table <ref>, showing the associated δ T^ML's and resulting κ_L^ML's. For comparison, we also list the δ T's and the related κ_L's from the rules in Sec. <ref>, as well as the exact values of δ T^ref which would yield, from Eq. (<ref>), the κ_L^ref's in the literature (see text for discussions). It is relevant to observe that the overall discrepancy between our κ_L's and those assumed as references (cf. Table <ref>) is 37% and 55% when employing, respectively, the rules in Sec. <ref> and the ML method. We speculate that the larger difference for the ML approach is due to the small database considered here, of only twenty systems. We expect that increasing the number of materials considered to generate { b } should considerably improve the results.
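A minimal sketch of this regression step, assuming a small table of descriptors has already been assembled (the arrays below are placeholders, not the actual database), could use scikit-learn's ordinary least squares implementation:

import numpy as np
from sklearn.linear_model import LinearRegression

# placeholder database: 20 materials x 9 descriptors (average frequency, vibration energy,
# lattice length, buckling/porous/fullerene flags, number of species, bond types, atoms per cell)
X = np.random.rand(20, 9)
y = np.random.rand(20) * 20.0              # placeholder delta_T targets in K

model = LinearRegression().fit(X, y)       # b_0 is model.intercept_, (b_1, ..., b_p) is model.coef_
delta_T_ml = model.predict(X[:1])[0]       # delta_T^ML predicted for one material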
http://arxiv.org/abs/2307.01734v1
20230704141104
Billiards with Spatial Memory
[ "Thijs Albers", "Stijn Delnoij", "Nico Schramma", "Maziyar Jalaal" ]
nlin.CD
[ "nlin.CD", "cond-mat.soft", "nlin.PS", "physics.bio-ph" ]
Many classes of active matter develop spatial memory by encoding information in space, leading to complex pattern formation. It has been proposed that spatial memory can lead to more efficient navigation and collective behaviour in biological systems and influence the fate of synthetic systems. This raises important questions about the fundamental properties of dynamical systems with spatial memory. We present a framework based on mathematical billiards in which particles remember their past trajectories and react to them. Despite the simplicity of its fundamental deterministic rules, such a system is strongly non-ergodic and exhibits highly-intermittent statistics, manifesting in complex pattern formation. We show how these self-memory-induced complexities emerge from the temporal change of topology and the consequent chaos in the system. We study the fundamental properties of these billiards and particularly the long-time behaviour when the particles are self-trapped in an arrested state. We exploit numerical simulations of several million particles to explore pattern formation and the corresponding statistics in polygonal billiards of different geometries. Our work illustrates how the dynamics of a single-body system can dramatically change when particles feature spatial memory and provides a scheme to further explore systems with complex memory kernels. keywords: Active Matter | Memory | Mathematical Billiard | Chaos | Pattern Formation In cognitive psychology, spatial memory refers to the ability to remember and mentally map the physical spaces in the brain <cit.>. It is an essential process for spatial awareness and for optimized navigation through complex environments, either for a taxi driver in London to find the fastest route <cit.> or for a mouse to quickly find food in a maze <cit.>. In fact, a variety of species with different levels of complexity, from honey bees <cit.> and ants <cit.>, to birds <cit.>, bats <cit.>, and humans <cit.>, share this cognitive feature. Spatial memory, however, can also be achieved externally: in contrast to a cognitive map (where information is stored internally in the brain), the information is encoded in space itself, and then retrieved when the organism re-encounters it. Such memory can potentially enable collective behaviour in groups and optimize cost on the organismal level <cit.>. External spatial memory is often mediated by chemical trails and, generally speaking, could be attractive (self-seeking) or repulsive (self-avoiding). Some species of bacteria are attracted to the bio-chemical trails they leave behind and thereby form emergent complex patterns <cit.>. Examples of self-avoiding spatial memory can also be found in the slime mold Physarum polycephalum — a eukaryotic multinucleated single cell — which forms spatial memory by leaving extracellular slime at the navigated location while searching for food. The slime then acts as a cue and the cell avoids those regions which have been explored already (also see <cit.>). Other biological examples can be found in epithelial cell migration when cells modify their external environment by reshaping their extracellular matrix or by secreting biochemical signalling cues <cit.>.
The self-avoiding spatial memory is not limited to living systems, but can also be observed in physico-chemically self-propelled particles that actively change the energy landscape in which they manoeuvre. An example is auto-phoretic active droplets which move due to interfacial stresses caused by surface tension gradients <cit.>. Active droplets leave a chemical trail behind as they move around and avoid these trails due to the local change in concentration gradients. Similar self-avoiding behaviour has been observed in other self-propelling active particles such as spider molecules <cit.> and even nano-scale surface alloying islands <cit.>. Understanding and predicting the dynamics of active systems with memories is a difficult task. Most experimental systems are highly nonlinear and include probabilistic features that are often time and material dependent. Additionally, the interaction with the boundaries presents more complexities. Here, we ask how a dynamical system with self-avoiding memory behaves in two dimensions. We present a fully deterministic model with minimal ingredients for motile particles with spatial memory. We report that even such a simple single-body dynamical system exhibits chaos and complex interactions with boundaries, resulting in anomalous dynamics and surprisingly highly-intermittent behavior. Consider a classical billiard: a mass-less point-particle moves ballistically on a closed two-dimensional domain Ω⊂ℝ^2. The particle has a constant speed and does not experience any frictional/viscous dissipation. When reaching a boundary ∂Ω, the particle follows an elastic reflection, i.e., the angle of incidence is equal to the angle of reflection. For over a century, mathematical billiards of various shapes have been studied by physicists and mathematicians to understand dynamical systems and geometries related to various problems, from the theory of heat and light <cit.>, and (often Riemannian) surfaces <cit.>, to chaos in classical, semi-classical and quantum systems <cit.>. Here, we present a billiard with memory. In contrast to classical billiards, the particle continuously modifies the topology of the billiard table, creating spatial memory. We consider the simplest type of self-avoiding spatial memory: the particle reflects off its own past trajectories and avoids them in the same way it reflects off the boundaries (see figure <ref>a and supplementary video 1). This Self-Avoiding Billiard (SAB) features a series of interesting properties. First, it fundamentally lacks periodic orbits (closed geodesics), as the particle cannot follow its past. Second, the particle presents a continuous-time dynamical system with self-induced excluded-volume (see figure <ref>c). This means, in the long term, the particle reduces the size of its domain by consecutive intersections, i.e., Ω̃(t) → 0 as t →∞, where Ω̃(t) = ∫_Ω(t)d𝐱 is the area of the domain Ω(t) that remains accessible to the particle at time t. Hence, in a SAB, particles almost always have a finite total length (or lifetime) ℒ and eventually trap themselves in singular points in space and time. We refer to this long-time behavior as the arrested state. Finally, the topology of a SAB is not fixed as the generated spatial memory dynamically (and dramatically) changes the topology of the surface in a non-trivial manner. Consider a square. The topologically equivalent surface of a classic square billiard is a torus (easily obtained via the process of unfolding <cit.>). A self-avoiding particle generates a singular point at t=0, the moment it is introduced inside the square. 
As it begins to move (t>0), the singularity (now a line) extends inside the domain, resulting in surfaces with topological genus greater than 1 <cit.>. At some point, the particle forms a new closed domain which is most likely an irrational polygon with an unidentified topologically equivalent surface (see figure <ref>c). Importantly, the topological change results in anomalous transport of particles and memory-induced chaos: a small change in the initial condition of a particle can drastically change its trajectory as time grows. We demonstrate this in an example shown in figure <ref>b. The trajectory of the two initially close particles with the same initial angle suddenly separates at a point close to their initial conditions. This bifurcation leads to significantly different trajectories, which eventually self-trap at a distance r_f from each other (see video 2). One effective way to categorize the trajectories and demonstrate the chaos in SAB is to record the incident vector of each particle, ℳ_{ p_i }, where the boundaries of a polygon with 𝒩 edges are labelled A_j, with j ∈ [1,𝒩], and the segments of the trajectory are labelled l_i, with i ∈ [1, ∞) (see figure <ref>c). Two initially close particles have the same incident vector until the bifurcation moment. This is shown in figure <ref>d next to the variation of the effective area Ω̃ around two initially close particles (same as in figure <ref>b) and their distance r. The values of Ω̃ drop every time the particle traps itself in a new polygon, and clearly, Ω̃→ 0 and r → r_f as t →∞. Less evident is the probability of r_f for a pair of close particles that are randomly placed in a billiard. Figure <ref>e shows the ensemble-averaged probability density function of r_f for polygons of N ∈ [3,8]. While the majority of particles stay close to each other, many end up at larger distances, sometimes more than 100 times the initial one. The probability of a larger final distance r_f decays like a power-law, approximated as ρ∼ r_f^-p, where p≈1.4-1.7 (with a cut-off length set by the maximum length possible in a polygon). The origin of the power-law behaviour is yet unclear to us. This class of chaos observed in SAB shares similarities with the concepts of pseudo, weak or slow chaos <cit.>, where singular topological features change the fate of initially close particles, e.g., in the Ehrenfest billiard <cit.> or billiards with barriers <cit.>. A major difference here is that the singular features are induced by the particles themselves and hence depend on the initial particle conditions and the shape of the billiard. Given the chaotic and self-trapping nature of SAB, a natural question arises: where in space is a particle likely to become trapped? And how does this likelihood change as the (initial) geometry of the billiard changes? To answer these questions, we study self-avoiding rational polygonal billiards with different numbers of edges 𝒩∈ [3,∞). To this end, we perform computer simulations of 10^8 particles with random initial position vector (𝐱_0, ϕ_0), where 𝐱_0 is the position vector inside Ω and ϕ_0 is the initial angle (see appendix <ref> for mixing and illumination tests). The particles do not interact; hence the present results are all for a single-body system (see appendix <ref> for details of the numerical implementation). Figure 2a shows the probability density function of self-trapped locations 𝐱_f when t →∞ for the triangular billiard (𝒩=3). 
The chaotic properties of SAB result in highly complex and rich patterns. This is associated with sets (modes) of trajectories, some short-lived and some extremely long-lived. This can be seen in the distribution of the total length of the trajectories ℒ, shown in figure 2b (see appendix <ref> for the statistics of the line segments). For simplicity, we analyze this highly intermittent distribution in five different regions of total length (an alternative could be to look at sets of particles with similar incident vectors, ℳ). Region I corresponds to short-lived particles (0<ℒ≲ 3) where the particle self-traps quickly after the movement begins. The majority of these particles trap near the edges of the triangle. The distribution in region II is significantly different. Particles in this region move for longer distances and form complex structures inside the billiard. These structures suggest the presence of multiple modes of trajectories. Regions I and II include about 60% (29.4% and 29.3%, respectively) of all particles. The rest are particles with a higher lifetime, featuring a heavy-tailed distribution (see the inset in figure <ref>b). Particles in these regions generally do not end up near the vertices and efficiently use the available space without self-trapping early. The rare cases (extreme events) of ultra-long trajectories occur in region V. The long lifetime of these particles is a result of a zigzag motion between two almost parallel lines which were previously formed by the particle. Some of these trajectories are 20 times longer than the average trajectory length. However, the probability of their formation is less than 0.03%. The spatial distribution of the self-trapping positions highly depends on the initial shape of the billiard. Figure <ref>a shows the arrested states for various regular polygons, from a square to a nonagon, where the polygons are constructed by choosing 𝒩 equidistributed points on a unit circle. The total length distributions for these geometries, as well as for polygons with a higher number of vertices, are shown in figure <ref>b. The 𝒩-fold symmetric final patterns clearly vary with the geometry of the billiard. A few features, however, seem to be universal. Particles in even polygons tend to trap more near the vertices and also have a higher chance of a long lifetime (ℒ), since the polygon itself features parallel walls, allowing for zigzag bounces. In contrast, the self-trapping probability in the centre is higher for odd polygons, and the probability of a long lifetime is low. Notably, the triangle is the only polygon for which arbitrarily small orbits are both possible and highly likely, since it is the only one with interior angles smaller than π / 2. The results presented here illustrate the complex nature of dynamical systems with spatial memory. In contrast to previous studies on active particles with memory (e.g., those used in <cit.>), the current deterministic framework, based on mathematical billiards, employs extremely simple microscopic rules without noise or particle interaction. Yet, complex patterns and anomalous transport emerge due to memory-induced topological changes. We found that ballistic particles with spatial memory self-trap and exhibit topology-induced chaos. These dynamical characteristics make it non-trivial to predict the long-time asymptotic behavior of the system. Nonetheless, this limit can be accessed through numerical simulations. 
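A minimal sketch of one way such a simulation step can be organised is given below (the reference implementation described in the appendix is written in Julia; this illustrative Python sketch, including all function names and numerical tolerances, is an assumption on our part rather than the actual code). At each step, the earliest ray–segment intersection over all boundary edges and previously laid trail segments is found, the velocity is reflected elastically, and the traversed segment is appended to the wall list as new spatial memory.

import numpy as np

EPS = 1e-12

def next_collision(pos, vel, walls):
    """Return (t, index) of the earliest wall hit by the ray pos + t*vel."""
    best_t, best_i = np.inf, None
    for i, (a, b) in enumerate(walls):
        d = b - a
        det = -vel[0] * d[1] + d[0] * vel[1]
        if abs(det) < EPS:                      # ray parallel to this wall
            continue
        rhs = a - pos
        t = (-rhs[0] * d[1] + d[0] * rhs[1]) / det    # time along the ray
        s = (vel[0] * rhs[1] - vel[1] * rhs[0]) / det  # parameter along the wall
        if t > 1e-9 and -EPS <= s <= 1 + EPS and t < best_t:
            best_t, best_i = t, i
    return best_t, best_i

def reflect(vel, a, b):
    """Elastic reflection of vel off segment (a, b): v' = v - 2(v.n)n."""
    d = b - a
    n = np.array([d[1], -d[0]]) / np.linalg.norm(d)
    return vel - 2.0 * np.dot(vel, n) * n

def run_sab(pos, angle, boundary, max_steps=10_000):
    """Propagate one self-avoiding particle; the trail becomes new walls."""
    vel = np.array([np.cos(angle), np.sin(angle)])
    walls = list(boundary)                      # boundary edges plus, later, the trail
    total_length = 0.0
    for _ in range(max_steps):
        t, i = next_collision(pos, vel, walls)
        if i is None:
            break                               # particle is (numerically) trapped
        hit = pos + t * vel
        walls.append((pos.copy(), hit.copy()))  # store the traversed segment as memory
        vel = reflect(vel, *walls[i])
        total_length += t
        pos = hit + 1e-9 * vel                  # nudge off the wall to avoid re-hitting it
    return pos, total_length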
As a dynamical system, a Self-Avoiding Billiard (SAB) fundamentally differs from classic billiards because the surface on which particles flow evolves over time, and the shape of the polygon almost always morphs into an irrational one, which is considerably more challenging to treat mathematically. Nevertheless, the initial shape of the polygon governs the final arrested state, as demonstrated in figure <ref>. There are several immediate opportunities to extend the findings of this work. Billiards of different geometries in elliptic or hyperbolic systems (e.g., stadium or Sinai billiards) exhibit fundamentally different ergodic and chaotic behavior. Combining spatial memory with such billiards complements the present study. Additionally, in biological or physicochemical systems, spatial memory often dissipates over time as chemical trails diffuse. This introduces another timescale t_m. The ratio of the particle's convective timescale to the fading timescale (known as the Péclet number in hydrodynamics) governs the dynamics of the system. In the current study, t_m →∞, indicating permanent memory. However, for finite values of t_m, one may observe both self-trapping and cage breaking, leading to different long-time behavior. Moreover, the particle's reaction to its memory and the boundaries represents another control parameter. Here, we consider the simplest form of elastic collision for all interactions. Inelastic collisions <cit.> or probabilistic collisions can significantly alter the system's dynamics. The study of many-body SAB is also of particular interest since particle interactions can take various forms, including reciprocal and non-reciprocal interactions <cit.>. Furthermore, in terms of practical applications, given the simplicity of the rules employed in this study, spatial memory could be utilized to optimize autonomous robotic systems <cit.> and active matter <cit.>, especially when combined with learning techniques <cit.>. § SUPPLEMENTARY MATERIALS §.§ Numerical implementation The pseudocode used to perform the SAB simulations is built around the following objects. Ω is the interior of the billiard table, with the boundary given by ∂Ω. p represents the particle. It has a position and velocity, which we choose uniformly at random at the beginning of a simulation. W is a wall object, which is a line segment with a normal vector n̂. M is the matrix used to calculate the time until collision t with a given wall, and the parametric position on the wall s. The numerical code is implemented in Julia <cit.> (also see <cit.>). §.§ Statistics of segments and angles The introduction of self-avoiding memory significantly changes the statistics of line segments (each individual collision) and their incident angles (see figure <ref>). Memory leads to a large probability of small segments due to local collisions around the self-trapping point and significantly reduces the chance of any segment being of the order of the polygon's length. The distribution of incident angles is also peculiar, with various jumps. The probability of angles close to π/2 is much higher in SAB, due to the formation of non-triangular billiards during self-entrapment. Note that, although the probability of angles in a classic billiard (no memory) is attainable via relatively easy geometrical considerations, we have yet to find a good geometrical description for SAB. §.§ Mixing and illumination The self-trapping and finite lifetime of the particles in SAB lead to inherently non-ergodic dynamics. 
Consequently, the system shows an anomalous, weakly chaotic mixing characteristic as particles reach the arrested state. A visual representation of such behaviour can be seen in figure <ref> (see video 3). A collection of initially closely spaced particles (in the circle in the centre of the triangle) flows and eventually self-traps in a set of locations. The final mixing state depends on the initial condition (and the shape of the polygon), here shown by changing the initial angle of motion (shown by six different vectors in figure <ref>b). The same features (self-trapping and chaos) similarly result in complex entrapment of light in illumination problems <cit.>. In an illumination problem, a light source is placed at a particular location and one observes which part of the room is illuminated and which part remains dark. In this context, particle trajectories and ray traces are equivalent. Figure <ref> exhibits two examples of illumination in a triangular SAB (see videos 4 and 5). Placing the light source at a vertex of the triangle results in a complex set of trapped positions (figure <ref>a). Hence, in the long term, everywhere inside a triangular SAB is dark, except at these positions. The final illumination is even more complex, with several small structures, when the light source is placed at the centre of a triangle edge (figure <ref>b). § SUPPLEMENTARY VIDEOS Video 1: Examples of classic and self-avoiding billiards for a particle with the same initial conditions. Video 2: Topology-induced chaos in a self-avoiding billiard: two initially close particles separate from each other and self-trap themselves at a larger distance. Video 3: Anomalous mixing in a self-avoiding billiard. A droplet of particles reaches an arrested state after interacting with self-created spatial memory. Video 4: The illumination test in a triangular billiard where the light source is placed at a vertex. Four trajectories are also shown for clarity. Video 5: The illumination test in a triangular billiard where the light source is placed between two vertices. Four trajectories are also shown for clarity. § ACKNOWLEDGEMENTS The authors would like to thank Alvaro Marin and Clélia De Mulatier for insightful discussions.
http://arxiv.org/abs/2307.03306v2
20230706213818
When Fair Classification Meets Noisy Protected Attributes
[ "Avijit Ghosh", "Pablo Kvitca", "Christo Wilson" ]
cs.LG
[ "cs.LG", "cs.CY" ]
Northeastern University, Boston, USA ([email protected]); Northeastern University, Boston, USA ([email protected]); Northeastern University, Boston, USA ([email protected]). The operationalization of algorithmic fairness comes with several practical challenges, not the least of which is the availability or reliability of protected attributes in datasets. In real-world contexts, practical and legal impediments may prevent the collection and use of demographic data, making it difficult to ensure algorithmic fairness. While initial fairness algorithms did not consider these limitations, recent proposals aim to achieve algorithmic fairness in classification by incorporating noisiness in protected attributes or not using protected attributes at all. To the best of our knowledge, this is the first head-to-head study of fair classification algorithms to compare attribute-reliant, noise-tolerant and attribute-unaware algorithms along the dual axes of predictivity and fairness. We evaluated these algorithms via case studies on four real-world datasets and synthetic perturbations. Our study reveals that attribute-unaware and noise-tolerant fair classifiers can potentially achieve a similar level of performance to attribute-reliant algorithms, even when protected attributes are noisy. However, implementing them in practice requires careful nuance. Our study provides insights into the practical implications of using fair classification algorithms in scenarios where protected attributes are noisy or partially available. When Fair Classification Meets Noisy Protected Attributes Christo Wilson August 1, 2023 ========================================================= § INTRODUCTION In October 2022, the White House released the Blueprint for an AI Bill of Rights <cit.>. This document, like other statements of AI principles <cit.>, calls for protections against unfair discrimination (colloquially, fairness) to be deeply integrated into all AI systems. Researchers and journalists have led the way in this area, both in terms of identifying unfairness in real world systems <cit.>, and in the development of machine learning (ML) classifiers that jointly optimize for predictive performance and fairness <cit.> (for a variety of different definitions of fairness <cit.>). Despite the widespread acknowledgment that fairness is a key component of trustworthy AI, formidable challenges remain to the adoption of fair classifiers in real world scenarios—chief among them being questions about demographic data itself. Many classical fair classifiers assume that protected attributes are available at training time and/or testing time <cit.> and that this data is accurate. 
However, demographic data may be noisy for a variety of reasons, including imprecision in human-generated labels <cit.>, reliance on imperfect demographic-inference algorithms to generate protected attributes <cit.>, or the presence of an adversary that is intentionally poisoning demographic data <cit.>. To attempt to deal with these issues, researchers have proposed noise-tolerant fair classifiers that aim to achieve distributional fairness by incorporating the error rate of demographic attributes in the fair classifier optimization process itself <cit.>. In other instances demographic data may not be available at all, which violates the assumptions of both classical and noise-tolerant fair classifiers. This may occur when demographic data is unobtainable (laws or social norms impede collection <cit.>), prohibitively expensive to generate (when large datasets are scraped from the web <cit.>), or when laws disallow the use of protected attributes to train classifiers (direct discrimination <cit.>). For cases such as these, researchers have proposed demographic-unaware fair classifiers that use the latent representations in the feature space of the training data to reduce gaps in classification errors between protected groups, either via assigning higher weights to groups of training examples that are misclassified <cit.>, or by training an auxiliary adversarial model to computationally identify regions of misclassification  <cit.>. Motivated by this explosion of fundamentally different fair classifiers, we present an empirical, head-to-head evaluation of the performance of 14 classifiers in this study, spread across four classes: two unconstrained classifiers, seven classical fair classifiers, three noise-tolerant fair classifiers, and two demographic-unaware classifiers. Drawing on the methodological approach used by <cit.> in their comparative study of classical fair classifiers, we evaluate the accuracy, stability, and fairness guarantees (defined as the equal odds difference) of these 14 classifiers across four datasets as we vary noise in the protected attribute (sex). To help explain the performance differences that we observe, we calculate and compare the feature importance vectors for our various trained classifiers. This methodological approach enables us to compare the performance of these 14 algorithms under controlled, naturalistic circumstances in an apples-to-apples manner. Based on our head-to-head evaluation we make the following key observations: * Two classical fair classifiers, one noise-tolerant fair classifier, and one demographic-unaware fair classifier performed consistently well across all metrics on our experiments. * The best classifier for each case study showed some variability, confirming that the choice of dataset is an important factor when selecting a model. * One demographic-unaware fair classifier was able to achieve equal odds for males and females under a variety of ecological conditions, confirming that demographics are not always necessary at training or testing time to achieve fairness. We release our source code and data[The code and data for replicating this paper can be found at <https://github.com/evijit/Awareness_vs_Unawareness>] so that others can replicate and expand upon our results. We argue that large-scale, head-to-head evaluations such as the one we conduct in this study are critical for researchers and ML practitioners. 
Our results act as a checkpoint, informing the community about the relative performance characteristics of classifiers within and between classes. For researchers, this can highlight gaps where novel algorithms are still needed (noise-tolerant and demographic-unaware classifiers, based on our findings) and provide a framework for rigorously evaluating them. For practitioners, our results highlight the importance of thoroughly evaluating many classifiers from many classes before adopting one in practice, and we provide a roadmap for choosing the best classifiers for a given real-world scenario, depending on the availability and quality of demographic data. Our study proceeds as follows: in <ref> we present a brief overview of the history of fair models and head-to-head performance evaluation. Next, in <ref>, we introduce the 14 classifiers and the metrics we use to evaluate them for predictive performance and fairness. In <ref> we present our experimental approach, including the datasets we use for our four case studies. In <ref> we present the results of our experiments and we discuss our findings in <ref>. § RELATED WORK We discuss different classes of fair classifiers, their known shortcomings, and how they have been evaluated in the past. §.§ Fair Classifiers <cit.> were one of the first to operationalize the idea of fairness in machine learning classifiers, through their key observation that awareness of demographics is crucial for building models that rectify unfair discrimination and historical inequity. Their work takes the idea of awareness literally, by incorporating protected attributes directly into the model and jointly optimizing for accuracy and fairness. Many subsequent works have built on this foundation by developing versions of classical ML classifiers that incorporate fairness constraints (decision trees, random forests, SVMs, boosting, etc. <cit.>). Collectively, we refer to this class of algorithms as classical fair classifiers. They are now widely available to practitioners <cit.> and have been adopted into real-world systems <cit.>. While classical fair classifiers are an important advance over their unconstrained predecessors, they rely on a strong assumption that data about protected attributes is accurate. Unfortunately, this may not be true in practice. For example, in contexts like finance and employment candidate screening, demographic data may not be available due to legal constraints or social norms <cit.>, yet the need to fairly classify people remains paramount. To bridge this gap, practitioners may infer peoples' protected attributes using human labelers <cit.> or algorithms that take names, locations, photos, etc. as input <cit.>. However, work by <cit.> demonstrates that these inference approaches produce noisy demographic data, and that this noise obviates the fairness guarantees provided by fair models. With these limitations in mind, researchers have begun developing what we refer to as noise-tolerant fair classifiers that, as the name suggests, jointly optimize for accuracy and fairness in the presence of uncertainty in the protected attribute data. 
Approaches include robust optimization that adjusts for the presence of noise in the fairness constraint <cit.>, adjusting the “fairness tolerance” value for binary protected groups <cit.>, using noisy attributes to post-process the outputs for fairness instead of the true attributes under certain conditional independence assumptions <cit.>, estimating de-noised constraints that allow for near-optimal fairness <cit.>, or a combination of approaches <cit.>. Noise-tolerant fair classifiers, like classical fair classifiers, still rely on the assumption that protected attributes are available at training time. As we discuss in <ref>, however, there are many real-world contexts in which this assumption may be violated. The strongest such impediment is legal: any inclusion of protected attributes in the classifier would be considered illegal direct discrimination. A different approach for achieving fairness through awareness that is amenable to these strong constraints is embodied by what we refer to as demographic-unaware fair classifiers. These algorithms do not take protected attributes as input, but they attempt to achieve demographic fairness anyway by relying on the latent representations of the training data <cit.>. Thus, this approach to classification still incorporates a general awareness of unfair discrimination and historical inequity without being directly aware of demographics. While demographic-unaware fair classifiers are an attractive solution in contexts where protected attributes are unavailable, practical questions about the efficacy of these algorithms remain. First, because these techniques are unsupervised, it is unclear what groups are identified for fairness optimization. Under what circumstances are demographic-unaware fair classifiers able to achieve fairness for social groups that have been historically marginalized or are legally protected? Conversely, are the groups constructed by demographic-unaware fair classifiers arbitrary and thus divorced from salient real-world sociohistorical context? Second, assuming that demographic-unaware fair classifiers do identify and act on meaningful groups of individuals, how does their performance (in terms of predictions and fairness) compare to classical and noise-tolerant fair classifiers? In this study, our goal is to begin answering these questions about relative performance across all four classes of fair classifiers. §.§ Head-to-Head Evaluation It is standard practice for ML researchers to compare the performance of their novel algorithms against competitors. However, these comparisons are rarely comprehensive; they focus on a narrow set of comparable algorithms to demonstrate advances over the state-of-the-art. While these evaluations are crucial for assessing the benefits of new algorithms, they do not paint a complete picture of performance across a variety of different algorithms, spanning both time and fundamental approaches. Benchmark studies address this gap by focusing on the evaluation of a large set of models under expansive and carefully controlled conditions <cit.>. These studies provide important context for the ML field by identifying models that do not work well in practice, models that have equivalent performance characteristics under a wide range of circumstances, and areas where new models may be needed. To the best of our knowledge, existing benchmark studies focus solely on classical fair classifiers, which motivates us to update their results. 
Thus, in this study we adopt the methodological approach for evaluation developed by <cit.> and build upon their work by evaluating four different classes of classifiers (both fairness constrained and unconstrained). In this section, we discuss the ongoing evolution of fair machine learning algorithms. We begin by briefly reviewing seminal work on demographic aware fair models. Next, we discuss the real-world practicalities that complicate the adoption of demographic aware fair models, thus motivating the need for demographic unaware fair models. §.§ Fairness Through Awareness <cit.> presented one of the earliest works studying the fairness of algorithmic classifiers. In this paper, the authors defined the notion of individual fairness as a distance or metric encoding the outcome difference between individuals—the source of “fairness” in their framing. This framing was adopted by many subsequent works that developed fair classifiers, meaning the models were expected to have access to the subjects' protected attributes in order to optimize the models' fairness constraint <cit.>. These models optimize a variety of different fairness goals, such as subgroup fairness <cit.>, equalized odds/equal opportunity <cit.>, and disparate impact <cit.>. <cit.> evaluated many of these models and showed that, in general, they are able to achieve high predictive performance and meet their fairness criteria—as long as ground truth protected attribute data is available. §.§ Impracticality of Fairness Through Awareness Unfortunately, in real-world scenarios, high quality data about the protected attributes of individuals may not be available. In some contexts, such as insurance or lending, businesses are legally prohibited from collecting such data <cit.>. In other contexts, like screening of job applicants, people may be reluctant to divulge their demographics out of a justified fear that this information will be misused to discriminate against them. Alternatively, in cases where datasets are “found”—by scraping the web <cit.>—individuals' protected attributes are typically not available and are costly to reproduce through annotation, especially when the datasets are massive. In the absence of protected attribute data, practitioners often resort to using third-party models or proxies to infer demographic attributes. These third-party proxy models have been shown to be biased against minority groups <cit.>, which can lead to the degradation of the fairness performance of such models <cit.>. Additionally, reliance on demographic inference may make models vulnerable to adversarial ML attacks that subvert fairness performance <cit.>. In summary, the combination of demographic inference and demographic aware fair models often leads to more harm than good. §.§ Fairness Through Unawareness One potential solution to the problems caused by the absence of high-quality demographic attributes in training data for fair classifiers is novel algorithms that do not rely on protected attributes at all. Instead, these algorithms aim to achieve distributional fairness by incorporating the error rate of demographic attributes in the fair classifier optimization process itself <cit.>, or by using the latent representations of the training data <cit.>. It is the latter two algorithms that are the focus of our study. § ALGORITHMS AND METRICS In this section, we introduce the 14 classifiers that we evaluated in this study and the metrics we used to evaluate them. 
§.§ Classifiers We group the classifiers that we evaluated in this study into four classes: (1) unconstrained classifiers that solely optimize for accuracy; (2) classical fair classifiers that require access to protected attributes at training (and sometimes testing) time, and assume that this data are accurate; (3) noise-tolerant fair classifiers that also require access to protected attributes but account for uncertainty in the data; and (4) demographic-unaware fair classifiers that jointly optimize for accuracy and fairness but without access to any protected attribute data. The set of classifiers we have selected is not exhaustive. Instead, we aim to include representative classifiers from the various types of approaches that exist within each class. We discuss the classifiers from each class that we selected for our study below, with further details on related approaches in each subsection. §.§.§ Unconstrained Classifiers We chose two classifiers that do not have any fairness constraints, they only aim to maximize predictive accuracy. * Logistic Regression (LR) is the simplest classifier we evaluate. While LR is demographic-aware because it takes all features (including protected attributes) as model inputs at both train and test time, it is not designed to achieve any fairness criteria. * Random Forest (RF) is an ensemble method for classification built out of decision trees. Like LR, we train RF classifiers on all input features including protected attributes. §.§.§ Classical Fair Classifiers We chose seven classifiers from the literature that take protected attributes as input and attempt to achieve demographic fairness. These classifiers vary with respect to how they implement fairness, by pre-processing data, in-process during model training, or by post-processing the trained model. In particular, there exist many techniques for fairness optimization in this class, such as: reweighting of samples via group sizes <cit.> or via mutual independence of protected and unprotected features in the latent representations <cit.>, adding fairness constraints during the learning process <cit.>, or by changing the output labels to match some fairness criterion <cit.>. The seven classifiers we choose below are representative of these different approaches. * Sample Reweighting (SREW) is a pre-processing technique that takes each (group, label) combination in the training data and assigns rebalanced weights to them. The goal of this procedure is to remove imbalances in the training data, with the ultimate aim of ensuring fairness before the classifier is trained <cit.>. * Learned Fair Representation (LFR) is a pre-processing technique that converts the input features into a latent encoding that is designed to represent the training data well while simultaneously hiding protected attribute information from the classifier <cit.>. * Adversarial Debiasing (ADDEB) is an in-process technique that trains a classifier to maximize accuracy while simultaneously reducing an adversarial network's ability to determine the protected attributes from the predictions <cit.>. * Exponentiated Gradient Reduction (EGR) is an in-process technique that reduces fair classification to a set of cost-sensitive classification problems, essentially treating the main classifier itself as a black box and forcing the predictions to be the most accurate under a given fairness constraint <cit.>. In this case, the constraint is solved as a saddle point problem using the exponentiated gradient algorithm. 
* Grid Search Reduction (GSR) uses the same set of cost-sensitive classification problems approach as EGR, except in this case the constraints are solved using the grid search algorithm <cit.>. * Calibrated Equalized Odds (CALEQ) is a post-processing technique that optimizes the calibrated classifier score output to find the probabilities that it uses to change the output labels, with an equalized odds objective <cit.>. * Reject Option Classifier (ROC) is a post-processing technique that swaps favorable and unfavorable outcomes for privileged and unprivileged groups around the decision boundaries with the highest uncertainty <cit.>. Note that the CALEQ and ROC algorithms have access to protected attributes at both train and test time, while the other classifiers only have access to protected attributes at training time. §.§.§ Noise-tolerant Fair Classifiers We chose three classifiers from the literature that take protected attributes as input and attempt to achieve demographic fairness even in the presence of noise. Other than the three classifiers that we chose, we are aware of only one other approach: by <cit.>, who suggests using de-noised constraints to achieve near-optimal fairness.[<cit.>'s source code only supported Statistical Parity and False Discovery constraints, not EOD, which is why we omitted their classifier from our analysis.] * Modified Distributionally Robust Optimization (MDRO) by <cit.> is an extension of the Distributionally Robust Optimization (DRO) algorithm <cit.> that adds a maximum total variation distance in the DRO procedure. By assuming a noise model for the protected attributes, it aims to provide tighter bounds for DRO. * Soft Group Assignments (SOFT), also by <cit.>, is a theoretically robust approach that first performs “soft” group assignments and then performs classification, with the idea being that if an algorithm is fair in terms of those robust criteria for noisy groups, then they must also be fair for true protected groups <cit.>. * Private Learning (PRIV) is an approach by <cit.> that uses differential privacy techniques to learn a fair classifier while having partial access to protected attributes. The approach requires two steps. The first step is to obtain locally private versions of the protected attributes (like <cit.>). Second, following <cit.>, PRIV tries to create a fair classifier based on the private attributes. For this study, we select the privacy level hyperparameter to be a medium value (zero). §.§.§ Demographic-unaware Fair Classifiers We chose two classifiers from the literature that attempt to achieve fairness without taking protected attributes as input. * Adversarially Reweighted Learning (ARL) harnesses non-protected attributes and labels by utilizing the computational separability of these training instances to divide them into subgroups, and then uses an adversarial reweighting approach on the subgroups to improve classification fairness <cit.>. * Distributionally Robust Optimization (DRO) is an algorithm that attempts to minimize the worst case risk of all groups that are close to the empirical distribution <cit.>. In the spirit of Rawlsian distributive justice, the algorithm tries to control the risk to minority groups while being oblivious to their identities. These two classifiers operate under similar principles: they both try to reduce the gap in errors between protected groups by reducing the classification errors between latent groups in the training set. 
They do, however, have one difference: while DRO just increases the weights of the training examples that have higher errors, ARL trains an auxiliary adversarial network to identify the regions in the latent input space that lead to higher errors and tries to equalize them, a phenomenon that <cit.> call computational identifiability. §.§ Evaluation Metrics To compare the above 14 classifiers head-to-head, we studied their predictive power and their ability to achieve a fairness condition. We also measured the stability of these quantities when noise in the protected attributes was and was not present (described in <ref>). To assess predictive performance we computed accuracy, defined as: Accuracy = (number of correct classifications) / (test dataset size). Accuracy is continuous between zero and one with the ideal value being one, which indicates a perfectly predictive classifier. Many measures of fairness exist in the literature <cit.>. For the purposes of this study, however, we needed to choose a metric that is supported by all 14 classifiers so that our comparison is apples-to-apples. The classical and noise-tolerant fair classifiers have support for achieving any user-specified fairness constraint, while the demographic-unaware fair classifiers try to minimize the gap in utility between the protected groups. Based on this limitation, and for the sake of brevity, we choose the Average Odds Difference between two demographic groups as our fairness metric, and subsequently choose Equal Odds Difference (EOD) over both groups as our regularization constraint for the classical and noise-tolerant fair classifiers. EOD is defined as: EOD = [(FPR_unpriv - FPR_priv) + (TPR_unpriv - TPR_priv)] / 2, where TPR is the true positive rate and FPR is the false positive rate. Priv and Unpriv denote the privileged and unprivileged groups, respectively. The ideal value of EOD is zero, which indicates that both groups have equal odds of correct and incorrect classification by the trained classifier. In this study, when we evaluate fairness, we do so for binary sex attributes. We adopted this approach because the datasets we use in our evaluation all include this attribute (see <ref>) and four classifiers in our evaluation (CALEQ, ROC, EGR, GSR) only support fairness constraints over two groups. Whenever necessary, we consider males to be the privileged group and females to be the unprivileged group. Note that optimizing for fairness between two groups is the simplest scenario that fair classifiers will encounter in practice—if they perform poorly on this task, then they are unlikely to succeed in more complex scenarios with multiple, possibly intersectional, groups. § METHODOLOGY In this section, we describe the approach we used to empirically evaluate the 14 classifiers that we chose for our study. §.§ Case Studies To observe how the classifiers perform on real-world data we chose four different datasets. The classification tasks are described below. Each dataset had binary sex as part of the input features. * Public Coverage <cit.>. The task is to predict whether an individual (who is low income and not eligible for Medicare) was covered under public health insurance. We used census data from California for the year 2018. * Employment <cit.>. The task is to predict whether an individual (between the ages of 16 and 90) is employed. For this task too, we looked at census data from California for the year 2018. * Law School Admissions <cit.>. The task is to predict whether a student was admitted to law school. 
* Diabetes <cit.>. The task is to predict whether a diabetes patient was readmitted to the hospital for treatment after 30 days. For each of these case studies, we split the dataset into train and test sets in an 80:20 ratio, trained every classifier on the same training set, and then used the trained classifiers to generate predictions on the same testing set. We verified via two-tailed Kolmogorov–Smirnov tests <cit.> and Mann–Whitney U tests <cit.> that the test set distribution for every feature was the same as the training set distribution. Finally, we calculated the metrics in <ref> on these predictions and compared the results from each classifier head-to-head. We repeated this procedure ten times to assess the stability of accuracy and EOD for each classifier. §.§ Synthetic Noise While studying the performance of these classifiers on a variety of real-world datasets is important, in order to get a more thorough understanding of the theoretical fairness and predictivity limits of the classifiers we subjected them to robust synthetic stress tests. As discussed in <ref>, in the real world, practitioners may not have access to the protected attribute information of people in their dataset. As a result, practitioners may use inference tools to find proxies for protected attributes, which can lead to unexpected, unfair outcomes <cit.>. To characterize what might happen in such a scenario, we perform the following synthetic experiments: * For each dataset, with a given probability (ranging from 0.1 to 0.9), we randomly flip the protected attribute labels (binary sex in this case) in the dataset. We refer to this probability value as noise. * With the synthetically generated dataset from Step 1, we then proceed to split the dataset 80:20, train all 14 algorithms on the same training set, and then calculate predictions on the same test set. The noisy (flipped) labels are passed as inputs to the classifiers at this step. * Next, with the predicted outcomes from Step 2, we calculate accuracy and EOD. Note that we calculate EOD with the true protected attributes, we measure the output bias in terms of the original sex labels from the given dataset. * We repeat Steps 1–3 ten times for each value of noise, to ensure statistical fairness and assess the stability of our metrics per classifier. <ref> shows the fraction of females in the noised datasets at each level of noise. The fraction of females goes up or down with noise depending on what the true fraction of females in the different datasets were to begin with. §.§ Calculating Feature Importance To help explain the variations in performance that we observed in our results, we calculated feature importance for each of our trained models. Although there are several black-box model explanation tools in the research literature—such as LIME <cit.>, SHAP <cit.>, and Integrated Gradients <cit.>—we required an explanation method that was model agnostic. The method that we settled on was KernelShap.[<https://shap-lrjball.readthedocs.io/en/latest/generated/shap.KernelExplainer.html>] According to the documentation, KernelShap uses a special weighted linear regression model to calculate local coefficients, to estimate the Shapley value (a game theoretic concept that estimates the individual contribution of each player towards the final outcome). As opposed to retraining the model with every combination of features as in vanilla SHAP, KernelShap uses the full model and integrates out different features one by one. 
It also supports any type of model, not just linear models, and was thus a good candidate for our study. <ref> shows an example distribution of feature importances calculated for the LR algorithm when trained on the Public Coverage dataset at noise level zero (no noise). In a similar fashion, we used KernelShap to calculate feature importance values for trained classifier outputs at noise levels 0, 0.2, 0.4, 0.6 and 0.8 for all 14 models. Research by <cit.> has shown that different explanation methods often do not agree with each other. We do not claim that the feature importances we calculated using KernelShap are guaranteed to agree with those produced by other tools. Nonetheless, we are specifically interested in the relative importance of the sex feature towards the final outcome as compared to the other input features. Shapley value-based explanations give us a reasonable sense of relative feature importance, as has been empirically shown in previous work <cit.>. § RESULTS In this section, we present the results of our experiments. We begin by examining the baseline performance of the 14 classifiers when there is no noise, followed by their performance in the presence of synthetic noise. Finally, we delve into feature importance explanations to help explain the relative performance characteristics of the classifiers. §.§ Baseline Characteristics <ref>(a–d) shows the accuracy and fairness outcomes for all 14 classifiers when there was no noise in the datasets. We executed each classifier ten times without fixing a random seed and present the resulting distributions of metrics using violin plots. We observe that most of the classifiers achieved comparable accuracy to each other on each dataset, and that most classifiers exhibited stable accuracy over the ten executions of the experiments. Learned Fair Representation (LFR), Soft Group Assignment (SOFT), and Distributed Robust Optimization (DRO) were the exceptions: the former two exhibited unstable accuracy on all four datasets, the latter on two datasets. As shown in <ref>(e–h), EOD was considerably more variable over runs than accuracy. The unconstrained classifiers (LR and RF) were relatively stable and, in some cases, achieved roughly equalized odds (on the Law School and Diabetes datasets). The classical fair classifier group contained the two least fair classifiers in these experiments (CALEQ and ROC), while the other pre-processing and in-processing algorithms performed relatively better. Adversarial Debiasing (ADDEB) was slightly unstable but the distribution centered around zero. Among the noise-tolerant fair classifiers, Soft Group Assignment (SOFT) was unstable on three out of four datasets, while the other two classifiers (MDRO and PRIV) were relatively more stable and more fair. The two demographic-unaware fair classifiers (ARL and DRO) were unstable on the Public Coverage dataset (<ref>e) and did not achieve equalized odds on the Employment dataset (<ref>f). However, ARL and DRO were stable and fair on the remaining two datasets. In summary, we observe that the accuracy and fairness performance of these classifiers was dependent on the dataset that they are trained and tested on, there was no single best classifier. Additionally, we can see that several classifiers are consistently unstable, which explains some of the results that we will present in the next section. §.§ Characteristics Under Noise Next, we present the results of experiments where we added noise to the protected attribute of the datasets. 
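To make the experimental loop concrete, the following is a minimal sketch of one round of the synthetic-noise procedure from the Methodology section: flip the binary sex labels with a given probability, train on the noisy attribute, and evaluate EOD against the true attribute. It is written in Python with a scikit-learn-style estimator; the helper names, the use of logistic regression, the 0/1 encoding of sex, and the assumption that X, y, and sex are NumPy arrays are illustrative choices, not the exact pipeline used in our experiments.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def flip_labels(sex, noise, rng):
    """Flip each binary protected-attribute label independently with probability `noise`."""
    flips = rng.random(len(sex)) < noise
    return np.where(flips, 1 - sex, sex)

def equal_odds_difference(y_true, y_pred, sex_true):
    """EOD = [(FPR_unpriv - FPR_priv) + (TPR_unpriv - TPR_priv)] / 2, males (1) privileged."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = np.mean(yp[yt == 1]) if np.any(yt == 1) else 0.0
        fpr = np.mean(yp[yt == 0]) if np.any(yt == 0) else 0.0
        return tpr, fpr
    tpr_u, fpr_u = rates(sex_true == 0)   # unprivileged group (females)
    tpr_p, fpr_p = rates(sex_true == 1)   # privileged group (males)
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))

def one_noise_run(X, y, sex, noise, seed=0):
    rng = np.random.default_rng(seed)
    sex_noisy = flip_labels(sex, noise, rng)        # Step 1: perturb the protected attribute
    X_noisy = np.column_stack([X, sex_noisy])       # the classifier only sees the noisy attribute
    Xtr, Xte, ytr, yte, _, sex_te = train_test_split(
        X_noisy, y, sex, test_size=0.2, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)  # Step 2: train on the noisy data
    y_pred = clf.predict(Xte)                               # Step 3: predict on the test set
    acc = np.mean(y_pred == yte)
    eod = equal_odds_difference(yte, y_pred, sex_te)        # fairness w.r.t. the *true* attribute
    return acc, eod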
We added noise in increments of 0.1 starting from 0.1 and ranging up to 0.9. We added a given amount of noise to each dataset ten times and repeated the experiment, thus we plot the average values of accuracy and EOD for each classifier at each noise level. <ref>(a–d) shows the accuracy of the 14 classifiers' outputs as we varied noise. We observe that the MDRO, SOFT, and LFR classifiers had poor accuracy across all datasets and noise levels, while the DRO classifier had poor accuracy in two out of the four datasets. These observations mirror those from <ref>, these classifiers exhibited poor average accuracy in the noisy experiments because they were unstable in general. The other classifiers tended to be both accurate and stable, irrespective of noise. As shown in <ref>(e–h), the EOD results were much more complex than the accuracy results. ROC generated unfair outputs over all four datasets, at every noise level. Its companion post processing algorithm, CALEQ, exhibited rising EOD with noise for the Public Coverage dataset (<ref>e) and falling EOD for the Employment and Diabetes datasets (<ref>f, h).[Note that a higher value of EOD (<ref>) signifies that females received more positive predictions than males.] The unconstrained classifiers (LR and RF) moved in the same direction for every dataset, either rising (<ref>e, f) or falling (<ref>h) with noise. The SOFT classifier also exhibited some variable behavior: on the Employment dataset EOD rose with noise (<ref>f), and on the Public Coverage (<ref>e) dataset it failed to achieve equal odds at higher noise levels. The remaining classifiers tended to achieve equal odds irrespective of the noise level. <ref> only depicts average values for accuracy and EOD, which is potentially problematic because it may hide instability in the classifiers' performance. To address this we present <ref> in the Supplementary Material, which shows the distribution of accuracy and EOD results for each classifier on each dataset at the 0.1, 0.5, and 0.9 noise levels. We observe that, overall, no classifier became consistently less stable as noise increased. Rather, the stability patterns for each classifier mirrored the patterns that we already observed in <ref>. In summary, the classifiers that had problematic performance in the baseline experiments (see <ref>) continued to have issues in the presence of noise. Additionally, the unconstrained classifiers exhibited inconsistent fairness as noise varied. Surprisingly, the noise-tolerant classifiers did not uniformly outperform the other fair classifiers. §.§ Feature Importance Finally, we delve into model explanations as a means to further explore the root causes of the classifier performance characteristics that we observed in the previous sections. First, we calculated feature explanations using KernelShap for every classifier at five noise levels—0, 0.2, 0.4, 0.6 and 0.8—using the method we described in <ref>. Next, we averaged the explanation distributions for each classifier to form a feature importance vector per classifier. Finally, we repeated this process for each dataset. For each dataset, we calculated Wasserstein distances <cit.> between the feature explanation distributions for each algorithm pair and present the results in <ref>. Additionally, we plot the rank of the sex feature in terms of mean absolute feature importance for each classifier and present the results in <ref> (we also show the range of ranks if they vary over noise). 
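As a concrete illustration of this comparison, a minimal Python sketch is given below; the variable names (in particular the per-model KernelShap output shap_values and the feature label "SEX") are hypothetical placeholders, and applying SciPy's one-dimensional Wasserstein distance to the averaged importance vectors is one possible reading of the procedure rather than our exact implementation.

import numpy as np
from scipy.stats import wasserstein_distance

def mean_abs_importance(shap_values):
    """Average |SHAP value| per feature over all test instances: one importance vector per model."""
    return np.mean(np.abs(shap_values), axis=0)

def pairwise_wasserstein(importance_by_model):
    """Wasserstein distance between the feature-importance distributions of every model pair."""
    names = list(importance_by_model)
    dist = np.zeros((len(names), len(names)))
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            dist[i, j] = wasserstein_distance(importance_by_model[a], importance_by_model[b])
    return names, dist

def sex_feature_rank(importance, feature_names, sex_feature="SEX"):
    """Rank of the protected attribute by mean absolute importance (1 = most important)."""
    order = np.argsort(-importance)            # indices sorted by descending importance
    ranked = [feature_names[k] for k in order]
    return ranked.index(sex_feature) + 1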
<ref> reveals that, with few exceptions (EGR in Public Coverage, EGR and GSR in Employment, EGR and ROC in Law school, and CALEQ, PRIV and ARL in Diabetes), most classifiers had similar feature explanation distributions. We do not observe any clear patterns among the exceptional classifiers, no classifier consistently diverged from the others across all datasets. Further, we do not observe clear correlations between accuracy, EOD, and feature distribution similarity, suggesting that different classifiers took different paths to reach the same levels of performance. <ref> is more informative than <ref>. Four of the classifiers that exhibited consistently poor performance—LFR, MDRO, and SOFT (<ref>a–d), and ROC (<ref>e–h)—learned to weight the sex feature higher than other features, which may point to the root cause of their accuracy and fairness issues. Similarly, the unconstrained classifiers (LR and RF) exhibited changing EOD with noise levels in three out of four datasets (<ref>e, f, h), but not for Law School Admissions (<ref>g), and we observe that they learned a relatively low weight for sex among the available features for the Law School dataset. CALEQ also learned a relatively low weight for sex on the Law School dataset and was subsequently unaffected by noise (<ref>g), but showed variable trends in EOD for the other three datasets (<ref>e, f, h) on which it learned a relatively higher weight for sex. Sex was the lowest ranked feature for the two demographic-unaware fair classifiers (DRO and ARL), which makes sense because they were not given these features as input. EGR and GSR also did not have access to sex while classifying the test dataset, so they also had sex as the lowest ranked feature. §.§ Fairness-Accuracy Tradeoff Three algorithms in our list - EGR, GSR, and PRIV, provide a mechanism to control the fairness-accuracy tradeoff via a hyperparameter – namely fairness violation eps in the case of EGR and GSR <cit.>, and the privacy level ϵ in the case of PRIV <cit.>. Based on the experiments the authors of these algorithms did in their papers, we used different eps values between 0.01 and 0.20 and ϵ values between -2 and 2 and reran our experiments. We found that tweaking the tradeoff hyperparameter did not contribute meaningfully to the stability and noise resistance capabilities of these algorithms. Consequently we omit these results from the paper. § CONCLUSION In this study, we present benchmark results—in terms of accuracy, fairness, and stability—for 14 ML classifiers divided into four classes. We evaluated these classifiers across four datasets and varying levels of random noise in the protected attribute. Overall, we found that two classical fair classifiers (SREW and EGR), one noise-tolerant fair classifier (PRIV), and one demographic-unaware fair classifier (ARL) performed consistently well across metrics on our experiments. In the future we recommend that ML researchers benchmark their own fair classifiers against these classifiers and that practitioners consider adopting them. One surprising finding of our study was how well SREW and EGR performed in the face of noise in the protected attribute. Contrast this to noise-tolerant classifiers like MDRO—whose performance did not vary with noise but was inaccurate on some datasets—and SOFT—which was consistently inaccurate and had variable fairness in the face of noise. 
These results suggest that some classical fair classifiers may actually fare well in the face of noise, and that adopting more complex noise-tolerant fair classifiers may not always be necessary. Another surprising finding of our study was how well ARL performed. As a demographic-unaware fair classifier it did not have access to the sex feature at training or testing time, yet it achieved fairness performance that was comparable to demographic-aware fair classifiers on three of our datasets, and its fairness performance was noise invariant on three datasets as well. We fit linear regression models on each dataset with sex as the independent variable, but these models did not uncover any obvious proxy features for ARL to use in place of the sex feature. This speaks to the strength of the ARL algorithm's adversarial approach to learning. On one hand, our results confirm that demographic-unaware fair classifiers can achieve fairness for real-world disadvantaged groups under ecological conditions. This is positive news for practitioners who would like to adopt a fair classifier but lack (high-quality) demographic data. On the other hand, we still urge caution with respect to the adoption of demographic-unaware fair classifiers for practical reasons. First, determining whether a classifier like ARL will achieve acceptable performance in a given context requires thorough evaluation on a dataset that includes demographic data, as we have done here. Second, even if a demographic-unaware fair classifier performs well in testing, its performance may degrade after deployment if the context changes or there is distribution drift <cit.>. Monitoring the health of a classifier like ARL in the field requires demographic data. In short, adopting a demographic-unaware classifier does not completely obviate the need for at least some high-quality demographic data. In general, the results of our study point to the need for further development in the areas of noise-tolerant and demographic-unaware fair classifiers. By releasing our source code and data, we hope to provide a solid foundation for evaluating these novel classifiers in the future. Our study has several limitations. First, we only evaluate classifiers using binary protected attributes. It is unclear how their performance and consistency would change under more complex conditions. That said, we are confident that the classifiers that performed poorly will continue to do so in the presence of more complex fairness objectives. Second, our case studies and synthetic experiments, while thorough, are by no means completely representative of all real world datasets and contexts. We caution that our results should not be generalized indefinitely. Third, we did not evaluate all of the classical fair classifiers from the literature (see <cit.> and <cit.> for more). Our primary focus was on adding to the literature by benchmarking noise-tolerant and demographic-unaware fair classifiers. Finally, in this study we only evaluated one fairness metric—EOD—because it was the common denominator among all of the classifiers we selected. Future work could explore fairness performance further by choosing other fairness metrics along with subsets of amenable classifiers. We thank the anonymous reviewers for their helpful comments. We also thank Jeffrey Gleason for notes on the manuscript. This research was supported in part by NSF (https://www.nsf.gov/) grant IIS-1910064.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. § SUPPLEMENTARY MATERIAL
http://arxiv.org/abs/2307.01597v1
20230704093838
Bridge the Performance Gap in Peak-hour Series Forecasting: The Seq2Peak Framework
[ "Zhenwei Zhang", "Xin Wang", "Jingyuan Xie", "Heling Zhang", "Yuantao Gu" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Zhenwei Zhang, Xin Wang, Jingyuan Xie, Heling Zhang, and Yuantao Gu, Department of Electronic Engineering, Tsinghua University, Beijing, China ([email protected], [email protected], [email protected], [email protected], [email protected]) Peak-Hour Series Forecasting (PHSF) is a crucial yet underexplored task in various domains. While state-of-the-art deep learning models excel in regular Time Series Forecasting (TSF), they struggle to achieve comparable results in PHSF. This can be attributed to the challenges posed by the high degree of non-stationarity in peak-hour series, which makes direct forecasting more difficult than standard TSF. Additionally, manually extracting the maximum value from regular forecasting results leads to suboptimal performance due to models minimizing the mean deficit. To address these issues, this paper presents Seq2Peak, a novel framework designed specifically for PHSF tasks, bridging the performance gap observed in TSF models. Seq2Peak offers two key components: the CyclicNorm pipeline to mitigate the non-stationarity issue, and a simple yet effective trainable-parameter-free peak-hour decoder with a hybrid loss function that utilizes both the original series and peak-hour series as supervised signals. Extensive experimentation on publicly available time series datasets demonstrates the effectiveness of the proposed framework, yielding a remarkable average relative improvement of 37.7% across four real-world datasets for both transformer- and non-transformer-based TSF models. [Figure: The peak-hour series forecasting task and its paradigms. (a) Original time series and peak-hour series; (b) PHSF paradigms.] Bridge the Performance Gap in Peak-hour Series Forecasting: The Seq2Peak Framework Yuantao Gu August 1, 2023 ================================================================================== § INTRODUCTION The variations in peak-hour values of time series within daily cycles hold significant implications across numerous application domains. A case in point is the telecommunications sector, where engineers regularly calibrate base station capacities to coincide with maximum traffic volumes, ensuring optimal communication quality <cit.>. In the energy sector, daily peak power consumption critically determines the provisioning of raw materials and the production of electrical power <cit.>. Some utility companies even adopt a billing structure predicated on individual peak demand. Likewise, understanding peak-hour traffic patterns in the transportation sector is crucial to effective urban planning <cit.>. Hence, precise Peak-hour Series Forecasting (PHSF) is significant for diverse industries, underlining the necessity of focused research in this area. Despite the importance of PHSF, the current body of research in this area is relatively underdeveloped. Existing research typically handles PHSF as a conventional sequence-to-sequence problem <cit.>, disregarding the crucial interplay between the peak-hour series and the overarching original series. These studies often fail to meet practical requirements regarding predictive scope and accuracy. In light of these shortcomings in current PHSF methodologies, the recent developments in TSF techniques provide a potentially promising pathway to explore.
Recently, there has been a notable surge in research concentrating on Time Series Forecasting (TSF), resulting in substantive advancements in methodologies such as LogTrans <cit.>, Informer <cit.>, Autoformer <cit.>, and DLinear <cit.>. While these techniques are robust in tackling multivariate long-term TSF issues, the empirical studies indicate their limitations when directly transposed to PHSF. They struggle with the less abundant information typically embedded in peak-hour values and their weak sequential correlations. Consequently, this scenario highlights the necessity for an innovative methodology explicitly devised for peak-hour series forecasting. We identify three paradigms in applying standard TSF methodologies to PHSF tasks (Figure <ref>). The first paradigm (PFP) solely relies on historical peak-hour series to forecast future peak-hour series, but it often underperforms due to the lower Autocorrelation Function (ACF) <cit.> compared to the full series (Figure <ref>). Research has demonstrated a positive correlation between ACF and data predictability <cit.>. The second paradigm (SFP) employs full series to forecast peak-hour series but faces challenges similar to the first method. The third paradigm (SFS) incorporates full series to forecast full series and manually extracts the maximum value of each day, but its ability to predict extreme values is compromised due to the loss function tends to minimize the average error. Therefore, for effective peak-hour series forecasting, it is necessary to: 1) Tackle the challenges associated with a wide forecast span and poor temporal dependencies and 2) Leverage the relationship between the peak-hour series and the original series. Based on our analysis and insights from recent research, we introduce Seq2Peak, a robust framework that bridges the gap between general time series and peak-hour series forecasting. Seq2Peak consists of two primary components: the Cyclic Normalization and the Peak-hour Decoder. The former models inter-cycle hourly relationships, introducing an innovative approach to extracting statistical measures specifically tailored for PHSF. The latter takes complete historical series as input, simultaneously outputting the original and peak-hour series. A hybrid loss function ensures learning priority is given to the mapping relationship between the original and peak-hour sequences. Effortlessly integrating into various forecasting models, Seq2Peak enhances peak-hour forecasting without considerably increasing computational complexity. Its efficacy is validated by significant performance improvement across multiple models and four real-world datasets, further supported by comprehensive ablation studies and parameter experiments. Our work heralds a new chapter in PHSF, paving the way for future researchers in this significant yet under-researched field. The contributions of this paper are summarized as follows: 1) We systematically introduce the task of peak-hour series forecasting, an essential yet under-researched problem, discussing its challenges and underlying causes. 2) We propose Seq2Peak, a novel PHSF framework that can be effortlessly integrated into and generalized across most existing forecasting models, addressing the identified challenges and unlocking the potential of deep models in peak-hour series forecasting. 3) We validate our framework through extensive testing on four real-world datasets, demonstrating a 37.7% average relative improvement compared to the original TSF models. 
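For concreteness, the baseline paradigms discussed above can be sketched as follows; this is an illustrative fragment rather than the authors' code, and the forecaster interface (model.predict) is a hypothetical stand-in for any standard TSF model.

import numpy as np

def sfs_peak_forecast(model, history, horizon_hours, period=24):
    # SFS paradigm: forecast the full hourly series with an ordinary TSF model,
    # then manually take the maximum of each day.
    # Assumes horizon_hours is a multiple of period.
    full = np.asarray(model.predict(history, horizon_hours))   # (horizon_hours, channels)
    daily = full.reshape(-1, period, full.shape[-1])
    return daily.max(axis=1)                                   # (horizon_hours // period, channels)

def pfp_peak_forecast(peak_model, peak_history, horizon_days):
    # PFP paradigm: forecast the peak-hour series directly from its own,
    # less autocorrelated, history.
    return peak_model.predict(peak_history, horizon_days)

Seq2Peak keeps the full-series input of the SFS route but replaces the manual maximum with a differentiable max-pooling decoder and a hybrid loss, as detailed in the methodology below.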
§ RELATED WORKS Peak-hour series forecasting: Current forecasting methods primarily rely on traditional TSF techniques. Various studies have applied traditional TSF methods to predict peak time, including <cit.>, which uses LSTM<cit.> to forecast the peak load of the current week, and <cit.>, which compares the performance of ARIMA<cit.>, SVR<cit.>, LSTM on power load data. Similarly, <cit.> utilizes ARIMA and LSTM, among other methods, to predict traffic patterns clustered for each base station during a week. <cit.><cit.> introduces Bi-LSTM to forecast the day-ahead peak electricity. <cit.> compares the performance of CNN and LSTM on peak-load data. <cit.> and <cit.> use linear regression and ARIMA, respectively, to model the relationship between the peak load of the next day and typical load patterns identified through clustering. However, with their sole reliance on historical peak time information and traditional time series prediction models, these methods often result in poor prediction performance. Time series forecasting: In recent years, innovative sequence processing models like the Transformer have outperformed traditional models like ARIMA and LSTM. Yet, their application in the domain of PHSF remains limited. Noteworthy developments in this area include Autoformer <cit.> which integrates series autocorrelation into the Transformer architecture. Informer <cit.> introduces sparsity into the attention mechanism to reduce the complexity of the Transformer. Additionally, <cit.> proposes DLinear, linear prediction models that perform comparably to the Transformer but are significantly smaller in scale. Unfortunately, our experiments indicate that directly applying these models to PHSF tasks yields unsatisfactory results. However, integrating these models with the proposed Seq2Peak framework can more effectively harness their potential, significantly advancing PHSF tasks. § METHODOLOGY Considering the limited research conducted on PHSF tasks, we formally define these tasks in <ref>. After that, we introduce the Seq2Peak framework, as depicted in Figure <ref>, a pioneering approach that we propose to address the problem of PHSF. The framework comprises two fundamental components: the Cyclic Normalization mechanism and the Seq2Peak decoder, which work in unison to predict the peak-hour series based on historical data. §.§ Problem Formulation In this section, we formally address the definition of the Peak-hour Series Forecasting task. We define a historical input data window of length N, represented as X_t={x_t,x_t+1,⋯,x_t+N-1} (t is omitted hereafter), where x_i∈ℝ^c and c represents the number of channels. The objective is to forecast the peak-hour series Y^peak derived from the original future series Y={x_t+N,⋯,x_t+N+M-1}. The peak-hour value is the maximum value downsampled from an interval of T=24h consecutively for each time series channel. Consequently, we obtain Y^peak∈ℝ^M/T× c by: y^peak_i,j=max{y_(i-1)T+1,j,y_(i-1)T+2,j,...,y_iT,j}, where j denotes the channel index. Notably, peak-hour values for different channels may occur at different hours. A PHSF model performs as a function to predict the future peak-hour series Y^peak based on the historical full series X, f(·):ℝ^N × c→ℝ^M/T× c. The predicted output is denoted as Ŷ^peak=f(X). §.§ Cyclic Normalization The requirement for a longer time span and less self-correlation are the main characteristics of PHSF tasks. 
Our study addresses these concerns by introducing the Cyclic Normalization (CyclicNorm) pipeline, which aims to learn data correlation and generate features more appropriate for PHSF tasks. CyclicNorm achieves this by modeling the correlation of distribution across varying hours within a cyclical interval. First, to enable forecasting models to learn the intrinsic correlations between different hours within the same cycle, we perform CyclicNorm(·) to the raw input time series to separate statistics representing non-stationary content from the normalized series. The mathematical expression is represented as follows: {X'^(i),μ_i,σ_i} = Norm(X^(i)). X^(i)= {x_i,x_i+T, x_i+2T,...}, 1 ≤ i ≤ T is the sub-sequence of the input X where i is the index of the hour in a day. We perform Norm(·) to the 24 sub-sequences individually by extracting sample means and dividing by standard deviations, obtaining two sets of statistics. M={μ_1,μ_2,...,μ_T} is the set of means, where μ_i ∈ℝ^c. Σ={σ_1,σ_2,...,σ_T} is the set of standard deviations, where σ_i ∈ℝ^c. The normalized X'^(i) is utilized as the input of the forecasting model to learn more innate and stable data correlation. The second stage of CyclicNorm further explores techniques to handle the statistics representing non-stationary input sequence information. Enlightened by prior works <cit.>, we provide two candidate modules to tackle two corresponding issues. Non-stationary shifting is utilized to overcome the distribution shifting of time series originating from the longer time span of PHSF tasks. The cyclic projector is employed to mitigate the drawback of over-stationarity mentioned in <cit.>. Via further experiment and analysis, we discovered empirically that the processing of the statistics is highly flexible and dependent on the selection of the forecasting model and the statistical characteristic of the target dataset. Therefore we merely provide a guideline in this second stage. The detail and structure of the particular modules should be determined through practice alongside other restrictions and observation. The third stage of CyclicNorm is a denormalization of the results from the forecasting model using the post-processed statistics mentioned above. The output of CyclicNorm is regarded as a standard TSF result which also serves as an input for further peak-hour inferencing. §.§ Seq2Peak Decoder The objective of optimization for standard TSF tasks focuses on minimizing the mean forecasting deficit, which contradicts the task of peak-hour series forecasting. However, PHSF models with a direct optimization strategy over peak-hour values have been proven to have poor generalizing ability to the test set. To overcome this dilemma, our Seq2Peak Decoder provides a simple yet highly effective optimization strategy: to optimize the loss function of the original time series and its corresponding peak-hour series simultaneously. To execute this strategy without introducing more trainable parameters, we attach a max-pooling layer of a stride and kernel size of 24 at the end of the previous standard forecasting result. The distinction between the max-pooling operation and manually processing the peak-hour series is that max-pooling allows back-propagation. Thus, we optimize the parameters of PHSF models via the following hybrid loss function l_hy. 
l_hy=α l_seq+(1-α)l_peak, where l_seq is the MSE loss between the ground truth original time series and the output of the penultimate layer, l_peak is the MSE loss between the ground truth of peak-hour series and the final output of the Seq2Peak decoder. α is a weighting factor that varies between 0 and 1. By employing this decoder and corresponding loss function, forecasting models can achieve stronger generalization abilities, fully capturing the original series' information while emphasizing forecasting performance for peak hour series. § EXPERIMENTS §.§ Experimental Setup Datasets We evaluate our methods on four large-scale real-world time series datasets: ETTh1/2[https://github.com/zhouhaoyi/ETDataset], Electricity[https://archive.ics.uci.edu/ml/ datasets/ElectricityLoadDiagrams20112014], and Traffic[http://pems.dot.ca.gov]. These datasets have an hourly granularity and belong to the domains of energy and traffic. They exhibit daily periodicity, making the PHSF task meaningful in this context. Baselines We apply Seq2peak to transformer-based and non transformer-based TSF models (Transformer <cit.>, Informer <cit.>, Autoformer <cit.>, and DLinear <cit.>) to investigate performance enhancement. For baselines, we perform the SFS paradigm. We examine mean square error (MSE) and mean absolute error (MAE) as metrics. §.§ Main Results We first examine the four paradigms depicted in Figure <ref>, implemented on Transformer. The results of these experiments (Figure <ref>) demonstrate the superiority of Seq2Peak over the other three paradigms in accurately predicting future peak-hour series. SFS displayed the best performance among all paradigms, leading us to select it as a strong baseline for subsequent experiments. Furthermore, we applied our Seq2Peak framework to four commonly used forecasting models. Table <ref> compares the forecasting accuracy of the baselines and Seq2Peak. The results consistently show that Seq2Peak significantly outperforms all four baselines. Moreover, Seq2Peak demonstrates stable performance, contrasting sharply with the baselines which exhibit a high increase in error with the extension of the prediction length. These experiments affirm that the Seq2Peak framework can effectively enhance the accuracy and robustness of peak-hour series forecasting. §.§ Ablation Studies We delve deeper into the effectiveness of each module in the proposed framework. Ablation experiments are conducted using the Transformer and DLinear models on the ETT1 dataset. As shown in Figure <ref>, both CyclicNorm and Seq2Peak Decoder can enhance performance individually, and using them together yields even greater improvements. This demonstrates how these two modules, by focusing on different challenges and empowering the forecasting model from different perspectives, complement each other. In addition, we provide a hyper-parameter study for α in Eq.<ref> by plotting dynamic curves. As shown in Figure <ref>, we examine the performance on ETTh1 and ETTh2 datasets using Seq2Peak enhanced DLinear model. The tendency of the plot indicates that the best-performing α occurs around 0.5, which validates the necessity of applying the hybrid loss function. § CONCLUSION In conclusion, this paper addresses the crucial but often overlooked issue of Peak-hour Series Forecasting. We proposed Seq2Peak, a novel framework specifically tailored for PHSF tasks, effectively bridging the gap between conventional TSF methods and PHSF. 
Through extensive experiments, Seq2Peak has demonstrated significant performance improvements across various datasets for transformer- and non-transformer-based state-of-the-art TSF models. This study provides an effective solution to the PHSF problem and paves the way for further explorations in this critical and challenging field. This research was made possible through the generous financial support of Huawei Corporation. We gratefully acknowledge their contributions to the successful completion of this study.
http://arxiv.org/abs/2307.01336v1
20230703201439
The alpha particle charge radius, the radion and the proton radius puzzle
[ "F. Dahia", "A. S. Lemos" ]
hep-ph
[ "hep-ph", "gr-qc", "physics.atom-ph" ]
[email protected] Departament of Physics, Universidade Federal da Paraba Grande, Joo Pessoa - PB, Brazil [email protected] Departamento de Fsica, Universidade Federal de Campina Grande, Caixa Postal 10071, 58429-900 Campina Grande, Paraba, Brazil Recent measurements of the Lamb shift of muonic helium-4 ions were used to infer the alpha particle charge radius. The value found is compatible with the radius extracted from the analysis of the electron-helium scattering. Thus, the new spectroscopic data put additional empiric bounds on some free parameters of certain physics theories beyond the Standard Model. In this paper, we analyze the new data in the context of large extra-dimensional theories. Specifically, we calculate the influence of the radion, the scalar degree of freedom of the higher-dimensional gravity, on the energy difference between the 2S and 2P levels of this exotic atom. The radion field is related to fluctuations of the volume of the supplementary space, and, in comparison with the tensorial degrees of freedom, it couples to matter in a different way. Moreover, as some stabilization mechanism acts exclusively on the scalar degree of freedom, the tensor and scalar fields should be treated as phenomenologically independent quantities. Based on the spectroscopic data of muonic helium, we find constraints for the effective energy scale of the radion as a function of the alpha particle radius. Then, we discuss the implications of these new constraints on the proton radius puzzle. The alpha particle charge radius, the radion and the proton radius puzzle A. S. Lemos Received Month day, 2023; accepted June day, 2023 ========================================================================= § INTRODUCTION At the end of the last century, interest in extra-dimensional theories was renewed by braneworld theories. Their original formulation was proposed to explain the discrepancy between the scales of the electroweak and gravitational interactions <cit.>. According to these models, our ordinary universe is a 4-dimensional hypersurface embedded in a higher-dimensional space <cit.>. This geometric interpretation follows from the assumption that particles and fields of the Standard Model are trapped to the brane and unable to escape to extra dimensions, unless they were subjected to processes involving energy scales far beyond TeV scale. Thus, the apparent four-dimensionality of spacetime would be a consequence of the existence of confinement mechanisms that keeps the particles and fields stuck in the brane <cit.>. Gravity, in contrast, has access to extra dimensions at energies even below the weak scale. The spreading of the gravitational field in the additional directions would be the reason why gravity appears to be much weaker than the other interactions at large distances. In this way, the braneworld scenario would provide a simple and alternative explanation for the hierarchy problem. At the same time, these models also predict that the strength of the gravitational interaction is greatly magnified at small length scales. This is a very interesting feature because, due to this modification, the high dimensionality of spacetime could, in principle, have measurable effects on many phenomena that take place in the brane. This expectation has motivated numerous researches in several areas of physics (such as high-energy particles <cit.>) aimed at probing empirical signals of extra dimensions, by investigating the behavior of the gravitational field at short distances. 
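For orientation, the modification of gravity alluded to here takes the standard form in ADD-type models (numerical factors that depend on conventions are omitted):

V(r) ≈ - G_D m / r^{1+δ}  for r ≪ R,    V(r) ≈ - G m / r  for r ≫ R,

i.e. the potential of a point mass m steepens below the compactification radius because the field lines spread into the δ extra dimensions, while the ordinary Newtonian law is recovered at large distances.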
In a submillimeter scale, for instance, the inverse-square law of gravity has been tested in laboratories using torsion balances. Usually, in these laboratories experiments, the higher-dimensional gravitational potential of a pointlike mass is parameterized by a power-law-like or Yukawa-like potentials <cit.> depending on the probed distance r from the source in comparison to the compactification radius R. According to torsion balance experiments, the extra dimension radius should satisfy the constraint R≤44 μ m <cit.>, when the theoretical deviation is expressed in the Yukawa parametrization. More recently, relying on the high precision now achieved in atomic transition measurements, hydrogen-like atoms have also been considered in the search for deviations of Newton's law of gravitation at the Angstrom scale <cit.> . In this regard and for the purpose of our later discussion, it is interesting to mention that transitions involving S-level are not computable in the thin brane model with two or more additional dimensions. This problem can be avoided in thick brane scenarios, where constraints for corrections of the inverse square law due to extra dimensions were obtained from the analysis of the hydrogen atomic energy spectrum <cit.>. In the thick brane framework, Standard Model particles are confined to the 3-brane, but their wave-functions extend somewhat in the transverse direction over a range of order of the brane thickness. In the thin brane models, the confinement is of delta type <cit.>. The thick brane model has also been applied to study muonic hydrogen spectroscopy in order to investigate the proton radius puzzle in the extra-dimensional scenario. The proton radius puzzle is the incompatibility in the measurement of the proton charge radius obtained from experiments involving electron-proton interaction and muonic atom spectroscopy <cit.>. This conundrum arose from the measurement of the proton radius extracted through the 2S-2P Lamb shift of muonic atoms <cit.> . Many proposed theoretical models have attempted to explain the discrepancy between the results, considering that this would be a possible indication of an additional force beyond the Standard Model interactions <cit.>. In the extra-dimensional thick brane scenario, according to Ref. <cit.>, the energy excess found in the measurement of the 2S-2P transition using muonic hydrogen spectroscopy could be accounted for by the modification of gravitational interaction. The muonic atom is obtained in laboratory by replacing the electron with a muon. As the muon is about 207 times heavier than the electron, the gravitational interaction has a much greater effect on the energy levels of this atom than on those of conventional hydrogen. In the thick brane scenario, this magnification can be very impressive. For example, the amplification factor can reach a figure of two billion in transitions involving the S-level <cit.>, since, in this scenario, this factor is proportional to the muon's gravitational mass multiplied by the atomic reduced mass raised to the 3rd power. Therefore, the spectroscopy of muonic atoms could be very useful in providing important constraints for modifications of gravity in the atomic domain. Recently, new data from muonic hydrogen-like atomic transitions have been obtained. Trying to shed light on the proton radius puzzle, precise measurements of the 2S-2P transitions in muonic helium-4 have been used to determine the ion charge radius of the α particle <cit.>. 
The results obtained are compatible with the value extracted from e-He scattering, and, thereby, the puzzle is not present here. In this work, we intend to discuss these new data in the context of the thick braneworld model. In general, the phenomenological viability of braneworld, such as the ADD model, also depends on the stabilization of the volume of the supplementary space <cit.>. To ensure this, an additional mechanism should act on this degree of freedom. From the brane perspective, the volume of the extra space can be viewed as a scalar degree of freedom of the higher-dimensional gravity. Its behavior is described by a scalar field known as the radion. Since the stabilization mechanism does not operate on the tensorial degrees of freedom of higher-dimensional gravity, it is recommendable to examine the effects of the radion and gravitons separately and to try to establish experimental bounds on each one independently whenever possible. Another important aspect to consider is that radions and gravitons couple to matter differently. In a pure ultra-relativistic regime, Standard Model fields on-shell would not produce radions. As a result, tests of extra-dimensional theories in high-energy colliders cannot probe the tensorial and scalar modes with the same accuracy in tree-level processes <cit.>. In this article, we examine the effects of the radion on the energy level of hydrogen-like atoms in the thick brane scenario. More specifically, in section II, the set up of the thick brane is described and the Hamiltonian of the nucleus-lepton interaction trough the radion exchange is explicitly determined. Using this Hamiltonian, we estimate, in section III, the corrections of the energy levels of the muonic helium due to the gravitational interaction between the muon and alpha nucleus mediated by the radion. The correction term depends on a free parameter of the model corresponding to an effective energy scale of the radion. By using recent data of the transition 2S-2P in the ionic muonic helium, we obtain experimental bounds for the effective energy scale of the radion as a function of the radius of the alpha particle. In the section IV, we reconsider the issue of the proton radius puzzle taking into account this new constraints and discuss whether these new bounds could exclude higher-dimensional gravity as a possible explanation for the puzzle. Finally, in Sect. V, we present our last remarks. § THE RADION AND THE ATOMIC LEVEL IN THICK BRANE SCENARIOS Alternative theories of gravity have been proposed in several different contexts <cit.>. In higher-dimensional scenarios, a straightforward modification of gravity is obtained by simply extending the Einstein-Hilbert action to the whole space-time, including the additional directions. In the ADD model <cit.>, for instance, the supplementary space has a topology of a torus T^δ with δ spacelike dimensions, and the action is given by: S_G=c^3/16π G_D∫ d^4xd^δz√(-ĝ) ℛ̂, where x denotes intrinsic coordinates of the brane and z represents coordinates of the extra space. The Lagrangian depends on the scalar curvature ℛ̂ of the ambient space and ĝ is the determinant of the spacetime metric (here, we are adopting the signature ( -,+,…,+)). The gravitational constant of the higher-dimensional space is G_D and c denotes the velocity of light in vacuum. 
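As a brief aside (a standard reduction, recalled here only for later reference): for field configurations that do not depend on the extra coordinates z, the integral over the torus in the action above simply yields its volume,

∫ d^4x d^δz √(-ĝ) ℛ̂ → (2π R)^δ ∫ d^4x √(-g) ℛ,

so that the effective four-dimensional coupling is G = G_D / (2π R)^δ, which is the relation between G_D and the Newtonian constant quoted below.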
According to the ADD model <cit.>, the background state is characterized by a flat spacetime that contains an extra space whose volume is given by ( 2π R) ^δ, where R denotes the compactification radius. In the presence of confined matter in the brane, the metric of the ambient spacetime will be determined by equations with the same form of the Einstein's equations. In the weak field limit, the higher-dimensional version of the linearized Einstein's equations can be written as □ĥ_AB=-16π G_D/c^4T̅_AB, for the tensor ĥ_AB=ĝ_AB-η_AB, which describes the perturbations of the geometry with respect to the Minkowski metric η_AB in the first order of G_D (the capital Latin indices run from 0 to 3+δ). The above equation is valid in coordinate systems where the condition ∂_A( h^AB-1/2η^ABh_C^C) =0 is satisfied. The operator □ corresponds to the D'Alembertian associated to the Minkowski metric and the source term is T̅ _AB=[ T_AB-(δ+2)^-1η_ABT_C^C], defined from T_AB, which is the energy-momentum tensor of the fields stuck in the brane. Due to the confinement, it is assumed that T_AB can be written as <cit.>: T_AB( x,z) =η_A^μη_B^νT_μν( x) f( z) , in a length scale greater than the thickness of the brane. Here, the Greek indices go from 0 to 3. In the above expression, T_μν( x) describes the effective distribution of energy and momentum of the confined fields along the brane, while the function f(z) is related to the profile of the fields in the transversal directions. In zero-width brane, f(z) would be a delta-like distribution. However, in a thick-brane model, f( z) would be some normalized distribution very concentrated around the brane. Now let us consider the solutions of equation (<ref>) in the ambient space whose topology is ℝ^3× T^δ. In this context, to take into account the compact topology of the extra dimensions, it is useful to describe the δ-torus as a quotient space of the Cartesian space ℝ^δ. Thus, the effect of the compact dimensions of the torus T^δ on solutions of (<ref>) can be simulated by means of mirror images of the source. The localization of these images in ℝ^δ is determined by the equivalence relation that defines the quotient space. So, it follows that the solution of the equation (<ref>) in the given topology for static sources can be written as: ĥ_AB( X) =Ĝ_D/c^4∑_i( ∫T̅_AB( X_i^') /| X-X_i^'| ^1+δd^3+δX_i^') , where, for the sake of simplicity, we write Ĝ_D=[16πΓ (δ+3/2)/(δ+1)2π^( δ+3) /2]G_D, with Γ as the gamma function. The variable X= ( x,z) represents the coordinates of points in the ambient space. For i=0, the coordinate X_i=0^' is the position vector of the real source inside the thick brane, and each X_i^' can be interpreted as the position vector of the source's mirror image i in the space ℝ^3+δ. At large distances from the source, the three-dimensional behavior of the gravitational field is recovered in the brane, i.e., the components ĥ_AB behave as | x| ^-1, for | x|≫ R <cit.>. But to reproduce the predictions of General Relativity, two additional conditions should be satisfied: the higher-dimensional gravitational constant should be related to the Newtonian constant G according to the formula G_D=( 2π R) ^δG <cit.>; and the volume of the supplementary space should be stabilized at long distance. This question is relevant here because fluctuations of the extra-space volume have influence on the asymptotic behavior of gravitational potential. In fact, particles confined to the brane couple to ĥ_μν, the induced metric in that hypersurface. 
But, in order to reproduce the General Relativity's predictions at large distances, particles should be effectively coupled to another tensor, let us say h_μν (without a hat), whose source term is not T̅_μν that appears in equation (<ref>), but, instead, is the reduced energy-momentum given by T̅_μν^( GR) ≡( T_μν-1/2η_μνT_γ^γ). The basic distinction between these two tensors is the coefficient multiplying the trace T_γ^γ. By comparing their respective sources term (T̅_μν and T̅_μν^( GR)), we can see that the difference between ĥ_μν and h_μν is proportional to η_μν. Based on these considerations, it is convenient to decompose the metric perturbation tensor as: ĥ_μν=h_μν+ϕη_μν, where h_μν is sourced by T̅_μν^( GR), while the field ϕ is sourced by the residual tensor T̅_μν-T̅_μν^( GR), which is proportional to η_μν. Taking into account the form of the energy-momentum tensor of confined fields (<ref>), we find that: ϕ=Ĝ_D/c^4δ/2( δ+2) ∑_i∫T_γ^γ( x^') f( z_i^') /|X⃗-X⃗_i^'| ^1+δd^3+δX_i^'. This field, which is a scalar quantity under brane coordinate transformations, is related to the trace in transversal directions of the metric perturbation. Indeed, from equations (<ref>) and (<ref>), we can see that ϕ=ĥ_â^â/2 (where the index â refers to the extra directions). Therefore, it describes fluctuations of the extra-space's volume around its background value ( 2π R) ^δ. As the volume can be expressed in terms of the compactification radius, ϕ is called the radion field <cit.>. At this point it may be useful to mention that in some references only the zero-mode oscillation of the scalar field is called radion. But here we are calling radion the field given by the equation (<ref>) which also depends on the extra-dimensional coordinates. As we have already mentioned, to recover the known behavior of gravity at large length scale, it is necessary that ĥ_μν tends to h_μν in the limit | x|≫ R . This condition demands that ϕ should be suppressed asymptotically. Usually, this is achieved by some mechanism that adds mass to the radion field <cit.>. In this case, far from the source, the massive radion field goes to zero exponentially, implying that the volume of the supplementary space stays stable around the background value. Therefore, to be consistent with observational data, extra-dimensional theories, such as the ADD model, need an additional theoretical ingredient that provides stabilization for the supplementary space volume. Beside this, some models propose the existence of new self-interacting scalar fields that inhabit the bulk <cit.>. These scalars fields of Brans-Dicke type will couple to the radion field, influencing its behavior. For these reasons, it is interesting to treat ϕ and the tensor h_μν as independent fields from the phenomenological point of view and to investigate the supposed effects of these fields separately on each experiment whenever is possible <cit.>. Laboratory tests of the inverse square law of gravity by torsion-balance experiments are capable of establishing constraints on each field individually when some conditions are attained. For instance, if the Compton wavelength λ of the radion is greater than R, then, the experimental data put bounds on the strength of the radion <cit.>. On the other hand, under the condition λ<<R, the tensor field is the quantity that is constrained <cit.>. Another important test of the large extra dimensions theories comes from high-energy colliders. 
It happens that, in these kind of experiments, the fields ϕ and h_μν are not probed with the same accuracy at the tree level <cit.>. The reason is that, according to (<ref>), the source term of the radion is the trace of the energy-momentum tensor. Therefore, radiation or any pure relativistic source is not capable of producing the field ϕ. In fact, the strength of the radion is limited by the rest mass of the particles when the source is on the mass-shell. Another restrictive aspect of the radion-matter interaction is the fact that the effects of the radion field on the motion of the particles are also limited by their rest mass and tends to vanish as the particle's velocity approaches the speed of light. Thus, in a collision with energy E, the influence of the field ϕ, in comparison to the contributions of the tensor h_μν, is reduced by a factor of the order of ( m/E) ^2, where m is the rest mass of the heaviest particle involved in the collision <cit.>. So, it is relevant to find alternative systems from which we can get new and independent bounds for the radion field. In the atomic system, as the matter is found in a non-relativistic regime, the fields can be tested at same level of precision. Motivated by this idea, we intend to investigate the effects of the radion field on the atomic energy spectrum of the muonic Helium. In this system, the nucleus is the source of a gravitational field that is probed by the muon, which plays the role of a test particle. The behavior of a particle in curved spacetimes is dictated by the Lagrangian L=-mc√(-g_ABẋ^Aẋ^B), where ẋ^A is the particle's proper velocity. From this Lagrangian, we can find that, in the weak field regime, the interaction of the particle with an external gravitational field is given by L_I=1/2mĥ_ABẋ^A ẋ^B. In this order of approximation, this Lagrangian can be rewritten as L_I=1/2ĥ_ABP^Aẋ^B, where P^A=∂ L/∂ẋ^A is the conjugated momentum of the particle. Clearly, the term P^Aẋ^B can be interpreted as the flux of the particle's momentum in spacetime. Thus, when we are dealing with fields, this term is equivalent to the energy-momentum tensor T^AB of the field, and therefore the corresponding Lagrangian of interaction will be translated as L_I =1/2∫( ĥ_ABT^AB) d^3+δX, which coincides with the expression obtained in <cit.>. Here we intend to focus our attention on the interaction mediated by the radion. Considering the form of T^AB for confined fields and the decomposition of ĥ_μν, we find that the coupling between the radion and the matter is given by the Lagrangian: L_I=12∫ϕ T_[t]( x) f_[t]( z) d^3+δX where T_[ t] =η_μνT_[t]^μν is the trace of the energy-momentum tensor of the test particle. We are using the t-index in reference to the test particle's quantities. Now, using equation (<ref>), we can express the radion field in terms of T_[ N] , i.e., the trace of the energy-momentum tensor of the nucleus. Thus, it follows from (<ref>), that the behavior of the test particle (i.e. the muon) under the gravitational influence of the nucleus mediated by the radion field is described by the Lagrangian: L_I=12Ĝ_Dc^4δ2( δ+2) ∑_i∫∫T_[ N] ( x_i^') T_[ t] ( x) f_[ N] ( z_i^') f_[ t] ( z) |X⃗-X⃗_i^'| ^1+δ d^3+δX_i^'d^3+δX. In the non-relativistic regime, as the time-time component of the energy-momentum tensor is much greater than the others, then T_[ t] ( x) can be approximated by c^2m_μ ρ_[ μ] ( x), where ρ_[ μ] ( x) is the normalized mass density of the muon, which has a mass m_μ. 
If we write the normalized mass density of the muon in terms of its field, ψ_[ μ], as ρ_[ μ] =ψ_[ μ] ^† ψ_[ μ], then we can find, from (<ref>), that the associated Hamiltonian is simply H_I=-L_I in this regime. Therefore, the Hamiltonian of the gravitational interaction between the nucleus and the muon through a massless radion field can be written, in this order of approximation, as: H_I=-Ĝ_Dm_μδ/4c^2( δ+2) ∑_i∫∫T_[ N] ( x^') ρ_μ]( x) f_[ N] ( z_i^') f_[ μ] ( z) /|X⃗-X⃗_i^'| ^1+δd^3+δX_i^'d^3+δX. In the case of a massive radion, a decreasing exponential factor, such as η e^-r/λ, should be considered in the above integral. In this exponential, the constant λ is the radion Compton wavelength and the adimensional constant η measures any modification of the radion-matter gravitational coupling that the stabilization mechanism could introduce. The influence of the radion field in the atomic energy levels can be computed from the average value ⟨ H_I⟩ of the Hamiltonian (<ref>) in the atom's states. If we consider that the radion's Compton wavelength is greater than the nuclear radius, then the most stringent constraints for the radion interaction at short distances can be extracted from transitions involving the S-level, due to the overlapping between the wave-functions of the muon and nucleus. It happens that the influence of the gravitational interaction between the muon and the Helium nucleus on these levels cannot be calculated in the infinitely thin brane scenario when δ>1, since, as pointed out in Ref. <cit.>, the internal gravitational potential inside the nucleus is not computable when the functions f( z) are idealized as delta-like distributions. This difficulty can be circumvented in thick brane scenarios, where the brane has a characteristic width and the transverse profiles f are regular distributions concentrated inside the brane. In the leading order, we find ⟨ H_I⟩ _S=-ηĜ_Dm_Nm_μδ/4( δ+2) ε^δ-2|ψ _S(0)| ^2, where ψ_S(0) is the wavefunction of the muon evaluated in the center of the nucleus, m_N is the nuclear mass that follows from the integration of T_[ N] and the parameter ε is a kind of an effective distance in the transversal directions between the nucleus and muon defined by the expression: 1/ε^δ-2=Γ( δ/2) ∫f_N( z^') f_t( z) /| z-z^'| ^δ-2d^δz^'d^δz. The value of this effective distance depends on the overlapping of the transversal profiles of the confined fields. When both transversal functions are identical normal distributions, the parameter ε is equal to the distribution's standard deviation multiplied by two. The gravitational constant G_D, defined in the ambient space, establishes a new length scale ℓ_D^δ+2=G_Dħ/c^3. Therefore, the energy shift on S-level due to the gravitational interaction depends, according to expression (<ref>), on an effective length scale defined by ℓ_eff^4=ℓ_D^δ+2/ε^δ-2 or, equivalently, on the effective energy scale Λ=hc/ℓ_eff. Writing wave functions of the nS-level in terms of the Bohr radius of the muonic Helium ion, we find that ⟨ H_I⟩ _nS=-c^7h^3m_Nm_μ/Λ_r^44π/n^3[a_0(μ^4He^+)]^3, where the effective energy scale of the radion Λ_r is defined in terms of Λ by absorbing the δ-dependent factor and the enhancing factor η, according to the expression: 1/Λ_r^4=δ/( δ^2-4) π^δ/2η/Λ^4. This expression is valid for δ>2. The gravitational shift of the energy of the 2P-level is weaker by a factor of the order of (a_0/R_α)^2 in comparison to (<ref>) and can be neglected in the first approximation. 
Thus, the gravitational interaction between the muon and the alpha particle will increase the difference between the 2P_1/2 and 2S level by the amount Δ E_G=-⟨ H_I⟩ _2S. § CONSTRAINTS FOR THE ALPHA PARTICLE RADIUS AND RADION'S EFFECTIVE ENERGY SCALE Recent measurements of the 2S-2P transition in the muonic helium-4 ion have tried to shed light on the proton radius puzzle. From these precise transition data, it is possible to extract the root-mean-square charge radius, r_α, of the α particle with high precision <cit.>. The new value is compatible with the charge radius obtained from scattering experiments between the electron and ^4He <cit.>, unlike what happens with analogous measurements involving muonic hydrogen and deuterium. Therefore, we can use these measurements to put new bounds on parameters of non-standard physics theories. The energy difference between the 2P_1/2 and 2S levels of the ( μ^4He) ^+ can be calculated with great accuracy based on the Standard Model. According to Ref. <cit.>, it is given by: Δ E_( 2P_1/2-2S) =[ 1677.690-106.220×( r_α^2/fm^2) ] meV, where the first term has an uncertainty of 0.292 meV and, in the second term, the uncertainty of the numeric coefficient is 0.008 meV. In the thick brane scenario, the gravitational interaction will increase the gap between these levels. According to the calculation of the last section, in the leading order, the previous expression would contain a new term that depends on the unknown parameter Λ_r: Δ E_( 2P_1/2-2S) =[ 1677.690-106.220×( r_α^2/fm^2) +5.182×10^-7×( TeV/Λ_r) ^4] meV. In this equation, Λ_r is expressed in TeV units and its numeric factor is calculated from equation (<ref>), by using the CODATA recommended values for that quantities. The theoretical prediction (<ref>) should be compared with the experimental value Δ E_( 2P_1/2-2S) ^exp=( 1378.521±0.048)meV <cit.>. Let us admit that the theoretical and experimental values should coincide within the combined uncertainty δ E=( δ E_th^2+δ E_exp^2) ^1/2, i.e., Δ E_( 2P_1/2-2S) =Δ E_( 2P_1/2-2S) ^exp±δ E. This condition determines constraints that must be satisfied by Λ_r and r_α together. The shadow regions in Figure 1 correspond to the values of the parameters ( r_α,Λ_r) permitted by the data. The inner and darker area corresponds to regions at 68% confidence level. The wider region has a 95% confidence level. The allowed regions are compatible with the absence of extra dimensions (Λ_r→∞). In this case, the charge radius of the alpha-particle would be r_α=1.67824(83)fm. Figure⁠ 1 explores a domain where alpha-particle radius is in a range delimited from the e-He scattering, namely, r_α(scatt)=1.681±0.004 fm <cit.>. According to the data, the strongest effects of extra-dimensional occur for greater alpha radius r_α. To obtain an estimate for the maximum contribution of the radion to the energy gap between 2P_1/2 and 2S states, let us consider r_α=1.689 fm, i.e., the value in the limit of 2 error-bars of the e-He scattering experiment. In this case, the energy from extra-dimensional gravitational interaction between the alpha nucleus and muon can reach 4.4 meV, corresponding to an effective energy scale of about 18 GeV, found at the border of the 2σ-confidence level region. § THE PROTON RADIUS PUZZLE As we have already mentioned, the size of the alpha particle measured by using muonic spectroscopy agrees with the radius inferred from the scattering of electrons by helium. 
Moreover, the proton radius obtained from recent measurements of hydrogen atom transitions is compatible with the value derived from muonic spectroscopy <cit.>. These results have reinforced the idea that, probably, it is not necessary to resort to nonstandard physics to explain the proton radius puzzle <cit.>. However, as a definitive explanation is not completely known yet, it is interesting to investigate all possibilities <cit.>. In this section we intend to check the implications of the constraints found here on this issue. More specifically, we want to examine whether the higher-dimensional gravity could be excluded as a possible explanation for the puzzle based on the present data. In the thick brane scenario, the Lamb shift of muonic hydrogen has a new contribution coming from the proton-muon gravitational interaction in higher dimensions. To solve the radius puzzle, the additional energy separation between 2P_1/2 and 2S levels produced by this interaction should be of the order of 0.3 meV, the unexpected excess of energy found experimentally in the Lamb shift of muonic hydrogen. It happens that, according to the data considered here, Λ_r=18 Gev is the effective energy scale of the radion which gives the strongest effect of extra dimensions on the energy levels of the atom and, at the same time, is compatible with the scattering radius of the alpha particle obtained from the e-He interaction. However, with this energy scale, the gravitational interaction between the proton and muon, mediated by the radion, would increase the Lamb shift of the muonic hydrogen just by 0.1 meV approximately. At first sight, we are led to think that higher-dimensional gravity could be discarded as an explanation for the puzzle based on this result. However, we should keep in mind that the effective scale Λ_r depends on G_D, η, and, in particular, on the parameter ε, defined by equation (<ref>), which can be interpreted as an effective distance between the muon and nucleus in the supplementary space. As this effective transverse distance depends on the overlapping of the functions f_muon ( z) and f_N( z) , which describes, respectively, the energy-momentum distribution of the muon and nucleus in the extra space, then the parameter ε could, in principle, assume different values for distinct muonic atoms. The shorter the parameter ε, the stronger the effect. If the radion exchange in the muonic helium can enlarge the energy difference between 2P_1/2 and 2S states by 4.4 meV, then the radion would be capable of increasing the Lamb shift of μ H by 0.3 meV, as long as the effective transversal distance between the muon and proton in μ H be shorter than the transversal distance between the muon and the alpha nucleus in μ ^4He^+. Indeed, considering equation (<ref>) and expressing Λ_r in terms of ε, we can find that the gravitational potential energy of the muonic hydrogen and of muonic helium both in 2S-state obey the following relation: . ⟨ H_I⟩ _2S| _μ H /. ⟨ H_I⟩ _2S| _( μ ^4He) ^+=m_H/m_α( a_0(μ ^4He^+)/a_0(μ H)) ^3( ε_μ ^4He^+/ε_μ H) ^δ-2 If the parameter ε had the same value in the two atoms, the effect of the radion on the energy of the S-states of muonic hydrogen would be almost 40 times smaller when compared with its influence on the same states of muonic helium. According to equation (<ref>), to get . ⟨ H_I⟩ _2S| _μ H =-0.3 meV and . 
⟨ H_I⟩ _2S| _( μ^4He) ^+ =-4.4 meV, the muon-proton transversal distance in hydrogen atom should be shorter than the transversal distance between the muon and helium nucleus by the following factor: ε_μ H/ε_μ^4He^+≃( 1/2.74) ^1/(δ-2). Some versions of braneworld models assume that quarks could be localized in different slices of a thick brane <cit.>. Therefore, distinct quarks could be at different transversal distances from the muon when they are in a bound state, forming an atom. Equation (<ref>) is, in this sense, connected to these types of models since it takes this possibility into account. However, the parameter ε depends on the nucleon's mass distribution, which relies mostly on the energy of quark-gluon interactions. Therefore, a theoretical justification of relation (<ref>) demands further investigations on the question of the energy distribution of quark-gluon interactions in the transversal directions. § CONCLUDING REMARKS A very interesting phenomenological implication of braneworld theories is the strengthening of the gravitational interaction over short distances. In this context, muonic atoms arise as promising systems for testing supposed modifications of gravity at the atomic scale. Because the mass of the muon is 207 times greater than the electron's mass, muonic atoms are more sensitive to probe deviation in the gravitational potential than conventional atoms. Indeed, when a muonic hydrogen is in an S -state, the energy of the proton-muon gravitational interaction in a thick brane model is more than 2 billion times greater than the energy of the same interaction acting within the conventional hydrogen. In this paper, we exploited this sensitivity to test the predictions of braneworld theories by using new experimental data on the Lamb shift of the muonic helium-4 ion. More specifically we have calculated explicitly the effects of the radion, the scalar degree of freedom of the higher-dimensional gravity, on the energy difference between the 2P_1/2 and 2S levels of the ( μ ^4He) ^+ . According to the thick braneworld, the gravitational interaction between muon and helium-4 nucleus will increase the gap between these levels by an amount that depends on an effective energy scale of the radion Λ_r . This quantity is a free parameter of the model defined in terms of G_D (the gravitational constant of the higher-dimensional space), η (the enhancing factor the radion field could acquire from a stabilization mechanism, for instance) and ε (an effective transversal distance between the muon and nucleus of the helium). By using the spectroscopic data of muonic helium-4 ions, which are consistent with the radius of the alpha particle inferred from the electron-helium scattering experiment, we have find experimental bounds for Λ_r as function of r_α. According to the available data, the strongest extradimensional effect the radion is capable to produce on the Lamb shift of ( μ^4He) ^+ is a 4.4 meV increase. From this result, we can deduce what is the additional separation energy in the Lamb shift of the μ H caused by the gravitational interaction between the proton and muon mediated by the radion. From the equation (<ref>), we see that, comparatively, the effect on the energy of S-levels produced by the radion exchange in the two muonic atoms depends on the ratio between the transverse distances ε_μ^4He^+ and ε_μ H. 
When the effective transversal distance between the muon and proton in μ H atom is sufficiently short (see equation (<ref>)), the radion can account for the unexplained excess of energy of 0.3 meV in the Lamb shift of the muonic hydrogen. In the case of six extra dimensions, for instance, ε_μ H should be about 78% of ε_μ^4He^+. The effective transverse distance depends on the profile in the supplementary space of the energy-momentum distribution of the muon and atomic nucleus. Some braneworld models predict that distinct quarks are stuck in different slices of a thick brane <cit.>, therefore their distances from the muon would be different. However, a theoretical justification of the relation (<ref>) demands further investigations on the energy distribution of the quark-gluon interaction (that provides the most part of nucleus mass) in the transverse direction of the 3-brane in this scenario. ASL acknowledge support from CAPES (Grant no. 88887.800922/2023-00). 99 add1N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B 429, 263 (1998). add2I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B 436, 257 (1998). rs1L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999). rs2L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999). rubakovV. Rubakov and M. Shaposhnikov, Phys. Lett. B 125, 136 (1983). lhcAad G et al. (Atlas Collaboration), Phys. Rev. Lett. 110, 011802 (2013). murataJ. Murata and S. Tanaka, Class. Quant. Grav. 32, 033001 (2015). hoyle01C. D. Hoyle, U. Schmidt, B. R. Heckel, E. G. Adelberger, J. H. Gundlach, D. J. Kapner and H. E. Swanson, Phys. Rev. Lett. 86, 1418 (2001). hoyle04C. D. Hoyle, D. J. Kapner, B. R. Heckel, E. G. Adelberger, J. H. Gundlach, U. Schmidt and H. E. Swanson, Phys. Rev. D 70, 042004 (2004). hoyle07D. J. Kapner, T. S. Cook, E. G. Adelberger, J. H. Gundlach, B. R. Heckel, C. D. Hoyle and H. E. Swanson, Phys. Rev. Lett. 98, 021101 (2007). atomicspec1Feng Luo, Hongya Liu, Chin. Phys. Lett. 23, 2903, (2006). Feng Luo, Hongya Liu, Int. J. of Theoretical Phys. 46, 606 (2007). atomicspec2Li Z-G, Ni W-T and A. P. Pat ^3 n, Chinese Phys. B, 17, 70 (2008). atomicspec3Zhou Wan-Ping, Zhou Peng, Qiao Hao-Xue, Open Phys., 13, 96 (2015). moleculeE J Salumbides et al, New J. Phys. 17 033015 (2015). safranovaM. S. Safronova, D. Budker, D. DeMille, D. F. J. Kimball, A. Derevianko and C. W. Clark, Rev. Mod. Phys. 90, no.2, 025008 (2018). hC. G. Parthey, A. Matveev, J. Alnis, B. Bernhardt, A. Beyer, R. Holzwarth, A. Maistrou and R. Pohl et al., Phys. Rev. Lett. 107, 203001 (2011). lemos3F. Dahia, E. Maciel and A. S. Lemos, Eur. Phys. J. C 78, no.6, 526 (2018). lemos4A. S. Lemos, G. C. Luna, E. Maciel and F. Dahia, Class. Quant. Grav. 36, no.24, 245021 (2019). rydM. P. A. Jones, R. M. Potvliege and M. Spannowsky, Phys. Rev. Res. 2, no.1, 013244 (2020). lemos5A. S. Lemos, EPL 135, no.1, 11001 (2021) . pHeboundV. V. Nesvizhevsky and K. V. Protasov, Class. Quant. Grav. 21, 4557-4566 (2004). pHeM. Hori, A. S ^3 tér, D. Barna, et al., Nature 475, 484–488, (2011). lemos1F. Dahia and A. S. Lemos, Phys. Rev. D 94, 084033 (2016). protonJ. P. Karr, D. Marchand and E. Voutier, Nature Rev. Phys. 2, no.11, 601-614 (2020). natureR. Pohl et al., Nature 466, 213 (2010). scienceA. Antognini et al., Science 339, 417 (2013). new9C.E. Carlson, B.C. Rislow, Phys. Rev. D 86, 035013 (2012). new10Roberto Onofrio, EPL 104, no.2, 20002 (2013). liZ. Li and X. Chen, arXiv:1303.5146 [hep-ph]. wangL. B. Wang and W. T. Ni, Mod. Phys. Lett. A 28, 1350094 (2013). new11P. 
Brax and C. Burrage, Phys. Rev. D 91, no.4, 043515 (2015). new12H. Lamm, Phys. Rev. D 92, no.5, 055007 (2015). lemos2F. Dahia and A. S. Lemos, Eur. Phys. J. C 76, no.8, 435 (2016). nature2J. J. Krauth, K. Schuhmann, M. A. Ahmed, F. D. Amaro, P. Amaro, F. Biraben, T. L. Chen, D. S. Covita, A. J. Dax and M. Diepold, et al., Nature 589, no.7843, 527-531 (2021). stabilizationN. Arkani-Hamed, S. Dimopoulos, and J. March-Russell, Phys. Rev. D 63, 064020 (2001). antoniadisI. Antoniadis, K. Benakli, A. Laugier, T. Maillard, Nucl.Phys. B 662, 40 (2003). goldbergW. D. Goldberger and M. B. Wise, Phys. Rev. Lett. 83, 4922 (1999). chackoZ. Chacko, E. Perazzi, Phys. Rev. D 68, 115002 (2003). BDextradimS M M Rasouli et al. Class. Quantum Grav. 31, 115002 (2014). collidersG. F. Giudice, R. Rattazzi and J. D. Wells, Nucl. Phys. B 544, 3-38 (1999). kehagiasA. Kehagias and K. Sfetsos, Phys. Lett. B 472, 39-44 (2000). graviscalarsG. F. Giudice, R. Rattazzi and J. D. Wells, Nucl. Phys. B 595, 250-276 (2001). adelbergREVE. G. Adelberger, B. R. Heckel, and A. E. Nelson, Annu. Rev. Nucl. Part. Sci. 53, 77 (2003). radionE. G. Adelberger, B.R. Heckel, S. Hoedl, C. D. Hoyle, D. J. Kapner and A. Upadhye, Phys. Rev. Lett 98, 131104 (2007). scattI. Sick, Phys. Rev. C 77, 041302 (2008). hesselsN. Bezginov, T. Valdez, M. Horbatsch, A. Marsman, A. C. Vutha and E. A. Hessels, Science 365, no.6457, 1007-1012 (2019). canadaW. Xiong, A. Gasparian, H. Gao, D. Dutta, M. Khandaker, N. Liyanage, E. Pasyuk, C. Peng, X. Bai and L. Ye, et al., Nature 575, no.7781, 147-150 (2019). solutionY. H. Lin, H. W. Hammer and U. G. Meiner, Phys. Rev. Lett. 128, no.5, 052002 (2022). jentchuraU. D. Jentschura. J. Phys. Conf. Ser. 2391, no.1, 012017 (2022). thickbraneN. Arkani-Hamed and M. Schmaltz. Phys. Rev. D, 61, 033005 (2000). thM. Diepold, B. Franke, J. J. Krauth, A. Antognini, F. Kottmann and R. Pohl, Annals Phys. 396, 220-244 (2018).
http://arxiv.org/abs/2307.03283v1
20230706203834
Improved rate-distance trade-offs for quantum codes with restricted connectivity
[ "Nouédyn Baspin", "Venkatesan Guruswami", "Anirudh Krishna", "Ray Li" ]
quant-ph
[ "quant-ph", "cs.IT", "math.IT" ]
For quantum error-correcting codes to be realizable, it is important that the qubits subject to the code constraints exhibit some form of limited connectivity. The works of Bravyi & Terhal <cit.> (BT) and Bravyi, Poulin & Terhal <cit.> (BPT) established that geometric locality constrains code properties—for instance n,k,d quantum codes defined by local checks on the D-dimensional lattice must obey k d^{2/(D-1)}≤ O(n). Baspin and Krishna <cit.> studied the more general question of how the connectivity graph associated with a quantum code constrains the code parameters. These trade-offs apply to a richer class of codes compared to the BPT and BT bounds, which only capture geometrically-local codes. We extend and improve this work, establishing a tighter dimension-distance trade-off as a function of the size of separators in the connectivity graph. We also obtain a distance bound that covers all stabilizer codes with a particular separation profile, rather than only LDPC codes. § INTRODUCTION We refer to an n,k,d quantum code when the code uses n physical qubits to encode k qubits. In other words, the code 𝒬 is a 2^k-dimensional subspace of ℂ^{2^n}. Furthermore, the code is robust to any set of erasures of size strictly less than the distance d. One way to represent the error correcting code is using a set of generators 𝒮 = {S_1,...,S_m}, a commuting set of Pauli operators. The code space is the simultaneous +1-eigenspace of these generators. The generators play the role of parity checks for classical codes; measuring the generators (via a syndrome-extraction circuit) yields a binary string s = (s_1,...,s_m) which can be used to deduce and correct errors. A coarse way of capturing the relationship between the generators and the qubits of 𝒬 is via the connectivity graph G. The vertices V of the connectivity graph G correspond to the n qubits of 𝒬, and two qubits q,q' are connected by an edge if they are both in the support of some generator S ∈ 𝒮. This relationship is coarse in that we discard information about which Pauli operator acts on a qubit—if qubits q and q' are in the support of S, we draw an edge between q and q' regardless of which non-trivial Pauli operator the restriction of S to q is. Nevertheless, there is an intimate relationship between the code and some graph invariants of G. This is the central motivation in our paper. In addition to being a useful theoretical tool, the study of graph invariants of G can impact the design and implementation of the syndrome-extraction circuit for 𝒬 in two dimensions. In this case, the edges of the graph correspond to two-qubit gates. 
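To make the construction above concrete, the following small sketch (not taken from the paper; generators are represented simply as sets of qubit indices, and the actual Pauli content of each generator is ignored) builds the edge set of the connectivity graph G:

from itertools import combinations

def connectivity_graph(n, generator_supports):
    # Vertices are the qubits 0, ..., n-1; an edge joins two qubits whenever some
    # generator acts non-trivially on both of them, ignoring which Pauli operator
    # it applies, exactly as in the coarse description above.
    edges = set()
    for support in generator_supports:
        for q, qp in combinations(sorted(support), 2):
            edges.add((q, qp))
    return edges

# Toy example: five qubits with nearest-neighbour checks give a path graph.
checks = [{i, i + 1} for i in range(4)]
print(connectivity_graph(5, checks))   # {(0, 1), (1, 2), (2, 3), (3, 4)}
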
Physical architectures might be limited in many ways—for example, in the length of reliable two-qubit gates that can be implemented, the number of two-qubit gates a qubit can be involved in or the number of two-qubit gates that can be implemented in parallel. The exploration of the relationship between codes and the associated connectivity graphs begins with the seminal work of Bravyi & Terhal <cit.>. They studied geometrically-local codes, i.e. codes for which qubits are embedded in D-dimensional Euclidean space and all generators _i ∈ are supported within a ball of radius w = O(1). Bravyi & Terhal proved that the distance of any D-dimensional local code is limited—they showed that d = O(n^1-1/D) . In a beautiful paper shortly thereafter, Bravyi, Poulin & Terhal <cit.> showed geometric locality also imposes a sharp trade-off between k and d. When D=2, they showed that kd^2 = O(n) . We henceforth refer to this result as the BPT bound. While this bound proves strict geometric-locality constrains code parameters, what is the best we can do with some limited set of long-range two-qubit gates? Conversely, what sorts of graphs does one have to realize in 2 dimensions to implement a code with some target parameters? These questions were partially addressed in <cit.>. The separation profile s_G of the graph G was identified as an important metric. To define this quantity, we begin with the graph separator. The separator 𝗌𝖾𝗉(G) is a set of vertices in V such that we can write V = A ⊔𝗌𝖾𝗉(G) ⊔ B. Furthermore, there are no edges between A and B and both partitions obey |A|, |B| ≤ 2n/3. The separator 𝗌𝖾𝗉(G) itself is not unique; we define s_G via a minimization over all separators: s_G = min_(G)(|(G)|) . Studying the separator helps us understand the connectivity of the connectivity graph—it formalizes the intuition that the more difficult it is to partition the graph into two pieces, the more connected it is. To motivate the separation profile, we note that some portions of G can be well connected while others can be poorly connected. The separator itself is a bit crude as it only quantifies the connectivity of the entire graph G. We quantify the connectivity of different patches of the graph G using a separator profile s_G(r) defined as follows. For 1 ≤ r ≤ n, the separation profile corresponds to the separator of all subgraphs of G of size at most r: s_G(r) = max_⊆ G, || ≤ rmin_()(|()|) . In particular, it was shown that the graph separator constrains the distance d and impacts the trade-off between k and d. Throughout this paper, we assume that the separator s_G(r) is a polynomial, i.e. s_G(r) ≤β r^c for some positive constant β and constant c ∈ (0,1]. Furthermore, we assume β and c are independent of n. In light of this, we drop the subscript n from the separation profile and simply write s(r). It was shown in <cit.> that the results of Bravyi & Terhal and by Bravyi, Poulin & Terhal can both be generalized. For a family {_n}_n of n,k,d quantum codes with connectivity graphs G_n with max degree Δ, these generalizations are both expressed in terms of the constant c: d = O(Δ n^c) , kd^2(1-c) = O(n) . Thus, even without the geometry of Euclidean space, codes are constrained by some abstract notion of connectivity. In <cit.>, the implications of these bounds for constructing syndrome-extraction circuits in 2 dimensions was studied. It was shown that constant-rate codes require a lot of long-range connectivity. Two caveats are in order. 
First, the bound on the distance requires that the connectivity graph G_n has bounded degree. Second, the trade-off between k and d does not match the BPT bound even in 2 dimensions. Specifically, it is known that s_G(r) = O(r^1/2) for geometrically-local graphs in ℝ^2. The k-d trade-off in Eq. (<ref>) implies that kd = O(n) . The mismatch between Eq. (<ref>) and Eq. (<ref>) could mean one of two things—either Eq. (<ref>) is not tight or there exist codes with separators s_G(r) = O(r^1/2) that can achieve better parameters. In this work, we address these problems. First, we obtain a tighter trade-off between k and d that only depends on the size of the separator. [Informal]theoremkd-tradeoff Let 𝒬 be an n,k,d quantum code whose connectivity graph G has separation profile s_G(r) ≤ O(r^c) for some c ∈ (0,1]. Then, kd^{(1-c^2)/c} = O(n) . Second, we obtain a degree-oblivious bound on the distance in terms of the separation profile. A similar result was obtained in <cit.>. That result only held for quantum LDPC codes, because it assumed that the connectivity graph had constant degree, but it used a milder assumption that the code had s_G(r)≤ O(r^c) only for r=n, rather than all r. Let 𝒬 be an n,k,d quantum code whose connectivity graph G has separation profile s_G(r) ≤ O(r^c) for some c ∈ (0,1]. Then, d = O(n^c). This represents a strengthening of the Bravyi-Terhal bound as it is now independent of the degree of the connectivity graph G. Both of these theorems follow from our main result, Lemma <ref>. Informally, this lemma states that under certain conditions, there are sets much larger than d that can be erased without losing encoded information. This was the intuition behind the BPT bound, and Lemma <ref> can be seen as a generalization of this idea to codes with connectivity graphs unconstrained by geometric locality. However, there remains a mismatch between the BPT bound in Eq. (<ref>) and Eq. (<ref>). It is still not clear whether Theorem <ref> is tight—for 2-dimensional local codes, we find kd^{3/2} = O(n) , which can be obtained by substituting c=1/2 in Theorem <ref>. To address this problem, we conclude this paper with a purely graph-theoretic conjecture. If this conjecture holds, then it would imply that Theorem <ref> can be strengthened so as to strictly generalize the BPT bound from geometrically-local codes to a broader class of codes characterized by non-expanding connectivity graphs. We state the conjectured generalization in Section <ref>. Related work: Flammia et al. <cit.> generalized the BPT bound to approximate quantum error correcting codes. Even when allowing for imperfect recovery of encoded information, there is a trade-off between k and d imposed by geometric locality. Delfosse extended the BPT bound to local codes in 2-dimensional hyperbolic space <cit.>. Kalachev and Sadov recently studied and generalized the Cleaning Lemma, the central tool used in the Bravyi-Terhal and BPT bounds <cit.>. In the context of circuits, Delfosse, Beverland & Tremblay <cit.> studied syndrome-extraction circuits subject to locality constraints. They demonstrated that, for certain classes of quantum LDPC codes, there is a trade-off between the circuit depth and width (the total number of qubits used). Most recently, Baspin, Fawzi & Shayeghi <cit.> demonstrated that there is a trade-off between the rate of sub-threshold error suppression, the circuit depth and the width. § BACKGROUND AND NOTATION §.§ Stabilizer codes In this paper we focus on stabilizer codes for simplicity. 
Stabilizer codes are the equivalent of linear codes in the classical setting. The ideas presented can be generalized to commuting projector codes (a superset of stabilizer codes) as in <cit.>. The state of pure states of a qubit is associated with ^2 and the state of n qubit pure states are associated with (^2)^⊗ n. Let = {, , , } denote the Pauli group and _n = ^⊗ n denote the n-qubit Pauli group. A stabilizer code = () is specified by the stabilizer group , an Abelian subgroup of the n-qubit Pauli group that does not contain -. The code is the set of states left invariant under the action of the stabilizer group, i.e. = {|ψ⟩ : |ψ⟩ = |ψ⟩ ∀∈}. Being an Abelian group, we can write in terms of m = (n-k) independent generators {_1,...,_m}. Let = () be a code and let {_1,...,_m} be a choice of generators for . The connectivity graph G = (V, E) associated with {_1,...,_m} is defined as follows: * V = [n], i.e. each vertex is associated with a qubit, and * (u,v) ∈ E if and only if there exists a generator ∈{_1,...,_m} such that u,v ∈(). For a set U ⊆ V, we let U=V∖ U denote the complement of U. We define the outer and inner boundaries of a subset U ⊆ V as follows: * Outer boundary: _+ U is the set of all qubits v ∈U such that for some u ∈ U, the edge {u,v}∈ E. * Inner boundary: _- U is _+U, the outer boundary of the complement of U. Equivalently, it is the set of all nodes u ∈ U such that for some v ∉U, the edge {u,v}∈ E. In the language of graphs, we refer to subsets U as a region of the graph G. The connectivity graph is not only a function of the code , but also of the choice of generating set {_1,...,_m}. Different generating sets can yield the same code but different graphs. Furthermore, in constructing the graph G, we discard information; if two qubits u,v are in the support of a generator , we draw an edge between them regardless of whether the restriction of to u is , or . Therefore, different codes might yield the same connectivity graph. For the sake of brevity, we do not repeat that the graph is a function of the set of generators and assume that the set of generators is fixed and implicit. In spite of this lack of uniqueness, the connectivity graph G is a very useful tool. For our purposes, it allows us to reason about correctability, i.e. to what extent the code is robust to erasure errors. It can also be used for different purposes, see for example <cit.> or <cit.>. For U ⊂ V, let [U] and [V] denote the space of density operators associated with the sets of qubits U and V respectively. The set U is correctable if there exists a recovery map : [U] →[V] such that for any code state ρ_∈, (_U(ρ_)) = ρ_. If d is the distance of the code , then any set of size strictly less than d is correctable. However, some codes contain correctable regions that are much larger than d-1. The connectivity graph makes it easy to represent and visualize this process of constructing large correctable regions. In the following lemma, we present all the tools about correctable regions in connectivity graphs that we will use. We refer to <cit.> for proofs of these statements. We restate some important results about correctable regions in an n,k,d code. * Trivial: Let U ⊂ V be a correctable set. Any subset W ⊂ U is correctable. * Distance: If U ⊂ V such that |U| < d, then U is correctable. * Union Lemma: Sets U_1,…,U_ℓ⊂ V are decoupled if the support of any generator overlaps with at most one U_i. 
Equivalently, for all i≠ j, U_i and U_j are disjoint and disconnected in the connectivity graph, meaning there are no edges between U_i and U_j. If U_1,…,U_ℓ are decoupled and each U_i is correctable, then ⋃_i=1^ℓ U_i is correctable. * Expansion Lemma: Given correctable regions U, T such that T ⊃_+ U, then T ∪ U is correctable. As originally noted in <cit.>, the size and number of simultaneously correctable regions in the connectivity graph G reveals a lot of information about the code . We state the following lemma from <cit.>. Consider an n,k,d stabilizer code defined on a set of qubits Q, |Q| = n, such that Q = A ⊔ B ⊔ C. If A,B are correctable, then k ≤ |C| . §.§ Correctable sets in graphs With the tools above in Section <ref>, we now can reason about the connectivity graph directly. In particular, all of the properties in Lemma <ref> can be understood entirely in terms of the connectivity graph. As such, we make the following definition. For a graph G, a d-correctable family ℱ is a family of subsets of vertices with the following properties. * Trivial: If U∈ℱ, then any subset of U is in ℱ. * Distance: Every set U with size |U| < d is in ℱ. * Union Lemma: If U_1,…,U_ℓ are disjoint and disconnected sets in ℱ, then ∪_i=1^ℓ U_i is in ℱ. * Expansion Lemma: If U,T∈ℱ and T⊃_+ U, then U∪ T∈ℱ. With this definition, we can now define what it means for a set of vertices (qubits) to be correctable with respect to its graph, rather than with respect to the quantum code. A set U is d-correctable with respect to graph G if U is in every d-correctable family of G. By abuse of notation, we may simply say U is correctable if d and G are understood. We observe that the family of d-correctable sets U with respect to G itself forms a d-correctable family, and it in fact forms the (unique) inclusion-minimum d-correctable family. Equivalently, the family of d-correctable sets is the intersection of all d-correctable families. Intuitively, a set U is correctable in a graph if and only if it can be deduced to be correctable using only properties 1, 2, 3, and 4 above. It follows immediately from Lemma <ref> that the family of correctable sets for a quantum code indeed form a d-correctable family with respect to the connectivity graph. Let 𝒬 be n,k,d quantum code with a connectivity graph G=(V,E). Then the family of sets that are correctable (in the quantum code sense) form a d-correctable family for G. In particular, any subset U⊂ V that is correctable (with respect to the quantum code 𝒬) is d-correctable with respect to the graph G. Corollary <ref>, together with Lemma <ref>, gives us a framework for proving quantum code limitations using purely graph-theoretic arguments: show that for certain families of graphs, any d-correctable family contains two sets A and B such that V∖ (A∪ B)≤ f(n,d), where f(n,d) is our desired upper bound on the dimension of the code. §.§ Separators In this section, we introduce the graph separator, a way to quantify the connectivity of the connectivity graph. Let G = (V, E) be a graph. The separator of G, written sep(G) is the smallest set T ⊂ V such there exists a disjoint partition of the vertices V=U_1⊔ T⊔ U_2 such that * Both of |U_1|, |U_2| ≤2/3|V|, and * There are no edges between U_1 and U_2. The separator is not uniquely defined as multiple sets could have the same size and still split the graph in two disjoint subgraphs. However, this multiplicity does not affect our results, as it suffices to use the existence of one such small set. 
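As a concrete illustration of this definition (our own toy example, not from the paper): for the L×L grid graph, deleting the middle column leaves two components of at most 2n/3 vertices each with no edges between them, so sep(G) ≤ L = √n. A short numerical check, assuming the networkx package is available:

import networkx as nx

L = 10
G = nx.grid_2d_graph(L, L)                     # n = L^2 vertices labelled (row, col)
separator = {(r, L // 2) for r in range(L)}    # the middle column, |separator| = L
A = {(r, c) for (r, c) in G.nodes if c < L // 2}
B = {(r, c) for (r, c) in G.nodes if c > L // 2}

n = G.number_of_nodes()
assert len(A) <= 2 * n / 3 and len(B) <= 2 * n / 3            # both parts are balanced
assert all(not ((u in A and v in B) or (u in B and v in A))
           for u, v in G.edges)                               # no edges between A and B
print(len(separator), "separator vertices for n =", n)        # 10 vertices, i.e. sqrt(n)
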
In this work, we prove limitations of quantum codes whose connectivity graphs are constrained by particular separation profiles. [Separation Profile] For any graph G on n vertices, we define its separation profile s_G : [1,...,n] → [1,...,n], s_G(r) = max{sep(H) : H subgraph of G with at most r vertices} . As an example, note that for a 2-dimensional grid graph G of side length L, s_G(r) = O(√(r)) for all 1 ≤ r ≤ L^2. For a D-dimensional grid graph of side length L, s_G(r) = O(r^c) for all 1 ≤ r ≤ L^D where c = 1-1/D. Like these examples, we consider when the separation profile G is polynomial, i.e. s_G(r) = O(r^c) for all 1 ≤ r ≤ n. This is a natural setting, as a variety of natural graph classes have polynomial separation profiles <cit.>. The following is a useful property of graphs with polynomial separation profiles. Let β≥ 1 and c∈(0,1) be constants. Let G have separation profile s_G(r) ≤β r^c for all r. There exists a constant α=α(β,c) depending on c such that for any r, there is a partition of the vertices of G into sets W_1,…,W_ℓ such that * Each W_i has size at most r, * |_- W_i| ≤α· s_G(r) * ℓ≤α· n/r. In words, this lemma states that the graph can be simultaneously partitioned into many sets W_1,...,W_ℓ. These sets can be large (of size r) and yet have small boundaries (|_- W_i| ≤α· s_G(r)). § PROOFS The main goal of this section is to prove Theorem <ref>. §.§ Warmup As a warmup, we show how to use our framework for quantum code limitations (Lemma <ref> and Corollary <ref>) by proving a weaker version of Theorem <ref>, which appears as <cit.>. Our proof simplifies some aspects of their proof. Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c)>0 such that the following holds for all positive integers n,k,d. Let be an n,k,d quantum codes with connectivity graph G If G has separation profile s_G(r) ≤β r^c for all r, then we have the following trade-off between k and d: kd^2(1-c)≤α n. By Lemma <ref> and Corollary <ref>, Theorem <ref> follows from the following graph theoretic lemma. Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c)>0 such that the following holds for all positive integers n≥ d. Let G be a graph on n vertices with separation profile s_G(r)≤β r^c. Then there exists a partition A⊔ B⊔ C of G such that A and B are d-correctable with respect to G and |C|≤α· n/d^2(1-c). To find the sets A in B above, we use the following lemma. This lemma appears in <cit.>, but we present the proof here for exposition, with a simpler analysis of the bound on the size of V∖ A. Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c)>0 such that the following holds for all positive integers n≥ d. Let G be a graph on n vertices with separation profile s_G(r)≤β r^c for all r. There exists a vertex subset A consisting of disjoint disconnected components of size at most d-1 such that |V∖ A|≤ (α n)/(d^1-c). This proof constructs a new graph from G. To avoid confusion, vertices of are referred to as nodes. Recursively construct a rooted binary tree 𝒯 whose nodes are labeled by subsets of vertices of G. Start with the root node, labeled by the entire vertex set V. While there is a leaf node U with size at least d, give node U two children labeled U_0 and U_1, such that U_0 and U_1 are disjoint and disconnected vertex subsets, each of size at most 2|U|/3, and such that S(U) U∖ (U_0∪ U_1) has size at most β |U|^c. Such children exists by the condition of the separation profile s_G(|U|)≤β· |U|^c. 
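In code form, the recursion just described looks schematically as follows (a sketch for illustration only, not from the paper; split(U) stands for any routine that returns two such children together with the removed separator, as guaranteed by the separation profile):

def decompose(U, d, split):
    # Returns (leaves, removed): the leaves of the binary tree, each of size < d,
    # together with the union of all separators S(U) removed along the way.
    if len(U) < d:
        return [U], set()
    U0, U1, S = split(U)
    leaves0, removed0 = decompose(U0, d, split)
    leaves1, removed1 = decompose(U1, d, split)
    return leaves0 + leaves1, set(S) | removed0 | removed1

def split_path(U):
    # Toy split for a path graph on consecutive integers: remove the middle vertex.
    U = sorted(U)
    m = len(U) // 2
    return U[:m], U[m + 1:], {U[m]}

leaves, removed = decompose(list(range(100)), d=5, split=split_path)
print(max(len(leaf) for leaf in leaves), len(removed))   # every leaf has fewer than 5 vertices

The set A of the lemma is the union of the returned leaves, and the geometric decay of the subset sizes along the tree is what yields the bound |V∖ A| = O(n/d^{1-c}).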
Since each child is a strict subset of its parent, this process clearly terminates, giving a finite binary tree. See Figure <ref> for an illustration. We now choose vertex subset A to be the union of all leaves of the tree, each of which has size at most d-1 by definition. Further, it is easy to check that all leaves are pairwise disjoint by construction. Thus, A has the desired property. We now bound the size of V∖ A. Since any child U' of U has size at most |U'|≤2/3|U|, and the only sets intersecting a node U are its descendants and ancestors, we have that, for all r, the sets whose size is in (2r/3,r] are all disjoint. This implies that there are at most 3n/2r sets with size in (2r/3,r]. Since all internal nodes in the tree have size at least d, we have |V∖ A| = ∑_U internal to 𝒯^ |S(U)| ≤∑_U internal to 𝒯^β|U|^c ≤∑_r=(2/3)^in i=0,1,…,log_2/3(d/n)^∑_U internal to 𝒯 |U|∈ (2r/3, r]^β r^c ≤∑_r=(2/3)^in i=0,1,…,log_2/3(d/n)^3n/2rβ r^c ≤αβ n/d^1-c where α=α(c) is a constant that depends only on c, and the last inequality holds because the sum is a geometric series, so the sum is within a constant factor of the largest term, which occurs when r∼ d. We now can prove Lemma <ref>, and thus Theorem <ref>. Apply Lemma <ref> to the graph G and let α be the implied constant. We obtain a partition A⊔A̅ of the vertex set V such that A has disconnected components of size at most d-1 and |A̅|≤α·n/d^1-c. Each component of A correctable (Distance property of Definition <ref>), so A is correctable. Apply Lemma <ref> again to the subgraph G[A̅] of G induced by A̅ (the graph with vertex set A̅ whose edges are the edges of G with both endpoints in A̅). Then there exists a partition A̅ = B⊔ C such that B consists of disconnected components of size at most d-1 and |C|≤α^2·n/d^2(1-c). Since these components are disconnected in G[A̅], they are also disconnected in G. Each component is correctable, so B is correctable. Thus, we have found our desired partition. §.§ Proof of Theorem <ref> and Theorem <ref> We now prove Theorem <ref>. We establish Theorem <ref> along the way. [Theorem <ref>, formal] Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c)>0 such that the following holds for all positive integers n,k,d. Let be a n,k,d quantum code with connectivity graph G. If the separation profile of G satisfies s_G(r) ≤β r^c for c ∈ (0,1], then we have the following trade-off between k and d: kd^1-c^2/c≤α n. To prove Theorem <ref>, we improve the graph-theoretic lemma, Lemma <ref>, to the following: Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c)>0 such that the following holds for all positive integers n≥ d. Let G be a graph on n vertices with separation profile s_G(r)≤β· r^c for all r. Then there exists a partition A⊔ B⊔ C of G such that A and B are d-correctable and |C|≤α· n/d^(1-c)(1+1/c). By applying our framework (Lemma <ref> and Corollary <ref>), this graph-theoretic lemma (Lemma <ref>) implies our desired quantum code bound (Theorem <ref>). The key improvement is captured in Lemma <ref>, which improves over Lemma <ref> when the sets have small boundary. Lemma <ref> states that sets W of d^1/c vertices with boundary ≪ d are correctable. To prove Lemma <ref>, we recursively remove separators until the graph has connected components of size less than d. Each resulting component is correctable, and crucially the boundary of each component has size <d and thus correctable, and you can recursively use the expansion lemma to deduce the whole set W is correctable. 
This is analogous to <cit.>, who showed that for geometrically local codes, d× d grids of qubits are correctable as their boundary has size ∼ d. Let β≥ 1 and c∈ (0,1) be constants. The following holds for all positive integers n≥ d. Let G be a graph on n vertices with separation profile s_G(r)≤β r^c for all r. For ε' = 1-(2/3)^c/20β, any set W of at most (ε d)^1/c vertices with outer boundary at most d/4 is d-correctable. Recursively construct a rooted binary tree 𝒯 whose nodes are labeled by subsets of vertices of G. Start with a root vertex, labeled by the set W. While there is a leaf U with size at least d, give vertex U two children labeled U_0 and U_1, such that U_0 and U_1 are disjoint and disconnected vertex subsets, each of size at most 2|U|/3, and such that S(U) U∖ (U_0∪ U_1) has size at most β |U|^c. Such children exists by the condition of the separation profile s_G(|U|)≤β· |U|^c. Since each child is a strict subset of its parent, this process clearly terminates, giving a finite binary tree. See Figure <ref>. By construction of the tree, the boundary of every U in the tree 𝒯 is a subset of ∂_+ W ∪⋃_U' ancestor of U^ S(U'). This is because ∂_+ W contains all neighbors of vertices of U outside W, and ⋃_U' ancestor of U^ S(U') contains all neighbors of U inside W. Thus, ∂_+ U has size at most |∂_+ U| ≤ |∂_+ W| + ∑_U' ancestor of U^ |S(U')| ≤ d/4 + ∑_U' ancestor of U^β|U'|^c ≤ d/4 + ∑_i=0^∞β( (2/3)^i|W| )^c = d/4 + β|W|^c/1-(2/3)^c≤(1/4 + εβ/1-(2/3)^c) d < d/3. The third inequality holds because the largest ancestor has size |W|, and each subsequent descendant's size is a factor less-than-2/3 smaller. We inductively show every set U labeling a node in the tree 𝒯 is correctable. The leaves of 𝒯 are correctable because their size is <d. Suppose that a node U in the tree is such that its children U_0 and U_1 are correctable (Distance property). Then U_0∪ U_1 is correctable as they are disjoint and disconnected (Union Lemma). Furthermore, by (<ref>), |∂(U_0∪ U_1)| ≤ |∂_+ U_0| + |∂_+ U_1| ≤ d/3 + d/3 < d-1 so ∂(U_0∪ U_1) is correctable (Expansion Lemma). Thus, the set ∂(U_0∪ U_1)∪ (U_0∪ U_1) is correctable, so U, which is a subset of ∂(U_0∪ U_1)∪ (U_0∪ U_1), is also correctable (Trivial property). This completes the induction, showing that all nodes of the tree are correctable, so in particular W is correctable. This result already allows us to prove Theorem <ref>, which we first restate here formally. [Theorem <ref>, formal] Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c)>0 such that the following holds for all k≥ 1 and positive integers n and d. Let be a n,k,d quantum code with connectivity graph G. If the separation profile of G satisfies s_G(r) ≤β r^c for c ∈ (0,1], then d ≤α n^c. Applying Lemma <ref> to the set of all vertices V, with the observation that _+ V = ∅. If d ≥ (ε')^-1 n^c, then the set V of all vertices is correctable. Then k = 0, which contradicts the k>0 assumption. Returning to Theorem <ref>, we now can use a known result on partitioning graphs into sets with small boundary, Lemma <ref>. It essentially states that we can obtain the guarantees of Lemma <ref> while additionally guaranteeing small boundary. More precisely, we can remove separators until our graph consists of disjoint disconnected components W_1,…,W_ℓ of size O(d^1/c) and boundary O(d), so that Lemma <ref> applies and W_1,…,W_ℓ are each correctable. 
Then by the Union lemma W_1,…,W_ℓ is correctable, and the number of vertices removed is, similar to Lemma <ref> with d'=d^1/c, O(n/d^(1-c)/c). Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c)>0 such that the following holds for all positive integers n≥ d. Let G be a graph on n vertices where s_G(r)≤β r^c for all r. There exists a d-correctable set A such that |V∖ A|≤ (α n)/(d^(1-c)/c). Let α_<ref>=α_<ref>(β,c) be given by Lemma <ref>. Let ε'=ε'(β,c) be given by Lemma <ref>, and let ε = min(ε', 1/(4βα_<ref>)). By Lemma <ref> with r (ε d)^1/c, there exists a partition W_1',…,W_ℓ' of the vertices of G such that * Each W_i' has size at most r≤ (ε d)^1/c, * |∂_- W_i'|≤α_<ref>· s_G(r) ≤ d/4, and * ℓ≤α_<ref>· n/d^1/c. The bound on |_- W_i'| follows from the assumption that ε < 1/(4βα_<ref>) which implies α_<ref>· s_G(r) ≤α_<ref>·β (ε d) ≤ d/4 . For each i=1,…,ℓ, let W_i (W_i'∖∂_- W_i'). Note that the outer boundary ∂_+ W_i is contained in ∂_- W_i': if there were an edge wv with w∈ W_i and v in ∂_+ W_i but outside of ∂_- W_i', then v is outside W_i', which means w is in ∂_- W_i. This contradicts the assumption of w∈ W_i. Set A = ⋃_i=1^ℓ W_i. We show that A is correctable. Since ∂_+ W_i ⊂∂_- W_i', we have |∂_+ W_i|≤ |∂_- W_i'| ≤ d/4. Further, |W_i|≤ |W_i'|≤ (ε d)^1/c. By assumption, ε≤ε' and therefore, we can apply Lemma <ref> which implies that W_i is correctable. By construction, ∂_+ W_i is contained in W_i', which is disjoint from W_j', and thus W_j, for all j≠ i. Hence, W_i is disconnected from W_j for all i≠ j. Therefore, by the Union Lemma, A is correctable. To conclude, we bound the size of V∖ A: |V∖ A| = ⋃_i=1^ℓ∂_- W_i'≤α_<ref>· n/(ε d)^1/c·d/4 = α n/d^(1-c)/c. for a constant α=α(β,c) depending only on β and c. We now can prove Lemma <ref>, which implies Theorem <ref> by Lemma <ref> and Corollary <ref>. Apply Lemma <ref> to the graph G and let α_<ref> be the implied constant depedning on β and c. We obtain a correctable set A such that |V∖ A|≤α_<ref>·n/d^(1-c)/c. Apply Lemma <ref> to the subgraph G[V∖ A] of G induced by V∖ A (the graph with vertex set V∖ A whose edges are the edges of G with both endpoints in V∖ A), and let α_<ref> be the implied constant. Then there exists a partition V∖ A = B⊔ C such that B consists of disconnected components of size at most d-1 and |C|≤α_<ref>·α_<ref>·n/d^(1-c)+(1-c)/c . Since these components are disconnected in G[V∖ A], they are also disconnected in G. Each component is correctable (Distance property) in Definition <ref>, so B is correctable. Thus, we have found our desired partition. To obtain a stronger bound, one might try applying Lemma <ref> twice, so that |C|≤ O(n/d^2(1-c)/c). This in particular would match the BPT bound in <cit.>—for example, if c=1/2, this would give |C|≤ O(n/d^2), which yields the bound kd^2≤ O(n). However, a second application of Lemma <ref> cannot be done, at least, in a black-box way. The issue is in the use of the Expansion Lemma. Given correctable sets U and T, the Expansion Lemma implies that U∪ T is correctable when T contains the boundary of U in the original connectivity graph. The proposed second application of Lemma <ref> would not work, because sets that are correctable with respect to the induced subgraph G[V∖ A] (particularly those that are correctable because of the Expansion Lemma) are not necessarily correctable with respect to the original graph G. At the same time, we conjecture that this stronger bound k ≤ O(n/d^2(1-c)/c) can be proved with additional ideas. 
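For orientation, here is the simple bookkeeping of the exponents involved (elementary algebra, not a new claim): since (1-c)(1+1/c) = (1-c^2)/c, the three trade-offs read kd^{2(1-c)} = O(n) (the earlier bound), kd^{(1-c^2)/c} = O(n) (Theorem <ref>) and kd^{2(1-c)/c} = O(n) (the conjectured bound). For c = 1-1/D, the separation profile of D-dimensional local graphs, the exponents become 2/D, (2D-1)/(D(D-1)) and 2/(D-1), the last one matching the BPT bound; at c=1/2 they evaluate to 1, 3/2 and 2 respectively.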
See Conjecture <ref> and Conjecture <ref>. § CONCLUSIONS In this paper, we found a sharper trade-off between k and d for a quantum code based on the separation profile of its connectivity graph G. We also showed that the distance d is upper bounded in terms of the separator of G, independently of the degree of the graph G. In this way, it improves on the Bravyi-Terhal bound from <cit.>. We observe that the bound in Theorem <ref> is still weaker than the Bravyi-Poulin-Terhal bound for 2-dimensional local codes. We conjecture that the Bravyi-Poulin-Terhal bound can be generalized completely from geometrically-local codes to a broader class of codes characterized by non-expanding connectivity graphs. Let β≥ 1 and c∈ (0,1) be constants. There exists a constant α=α(β,c) such that the following holds for all n≥ d. Let G be a graph with separation profile s_G(r)≤β r^c for all r. Then there exists a partition A⊔ B⊔ C of G such that A and B are d-correctable with respect to graph G and |C|≤α n/d^{2(1-c)/c}. Using our framework, we can prove the following theorem, which asserts that a stronger bound matching <cit.> follows from Conjecture <ref>. This theorem is a consequence of Lemma <ref> and Corollary <ref>. Assuming Conjecture <ref>, the following holds. Let 𝒬 be an n,k,d quantum code. Let G be the associated connectivity graph. For positive constants β and c ∈ (0,1), if this graph has a separator s_G(r) ≤β r^c, then we have the following trade-off between k and d: kd^{2(1-c)/c} = O(n) . Another direction is whether we can obtain better bounds using a more fine-grained description of the code. Our bounds use a coarse description of the code, the connectivity graph, but there are more detailed and natural descriptions. It is known that any quantum error correcting code can be described by a chain complex <cit.> C_2 ⟶ C_1 ⟶ C_0 with boundary maps ∂_2: C_2 → C_1 and ∂_1: C_1 → C_0, where the C_i are 𝔽_2 vector spaces and the ∂_j are linear maps between these spaces such that ∂_1 ∘ ∂_2 = 0. In this representation, qubits are associated with the elements of C_1, and X- and Z-type stabilizer generators are associated with C_2 and C_0, respectively. The relationship between the stabilizer generators and the qubits is specified by the boundary operators. The adjacency matrix of the connectivity graph we use here is some function of both ∂_2 and ∂_1. It would be interesting to use graph invariants of the bipartite graphs specified by ∂_2 and ∂_1 to arrive at better bounds and tighter trade-offs. § ACKNOWLEDGEMENTS NB is supported by the Australian Research Council via the Centre of Excellence in Engineered Quantum Systems (EQUS) project number CE170100009, and by the Sydney Quantum Academy. VG is supported in part by a Simons Investigator award, a UC Berkeley Initiative for Computational Transformation award, and NSF grant CCF-2210823. AK is supported by the Bloch Postdoctoral Fellowship from Stanford University and NSF grant CCF-1844628. RL acknowledges support from the NSF Mathematical Sciences Postdoctoral Research Fellowships Program under Grant DMS-2203067, and a UC Berkeley Initiative for Computational Transformation award.
http://arxiv.org/abs/2307.00956v1
20230703121015
Focusing dynamics of 2D Bose gases in the instability regime
[ "Lea Boßmann", "Charlotte Dietze", "Phan Thành Nam" ]
math-ph
[ "math-ph", "cond-mat.quant-gas", "math.MP" ]
We consider the dynamics of a 2D Bose gas with an interaction potential of the form N^{2β-1}w(N^β·) for β∈ (0,3/2). The interaction may be chosen to be negative and large, leading to the instability regime where the corresponding focusing cubic nonlinear Schrödinger equation (NLS) may blow up in finite time. We show that to leading order, the N-body quantum dynamics can be effectively described by the NLS prior to the blow-up time. Moreover, we prove the validity of the Bogoliubov approximation, where the excitations from the condensate are captured in a norm approximation of the many-body dynamics. § INTRODUCTION Since the pioneering work of Bose and Einstein in 1924 <cit.>, and especially after the experimental realization of the Bose-Einstein condensation in 1995 <cit.>, there has been a remarkable effort to understand the macroscopic behavior of interacting Bose gases from first principles. From the mathematical point of view, the theory of interacting Bose gases goes back to Bogoliubov's 1947 paper <cit.>, where he proposed an effective method to transform a weakly interacting Bose gas to a non-interacting one, subject to a modification of the kinetic operator due to the interaction effect. In the present work, we will focus on the rigorous derivation of Bose–Einstein condensation and Bogoliubov's theory for the dynamics of two-dimensional bosonic systems, where large attractive interaction potentials admit blow-up phenomena. These blow-up phenomena have been observed in experiments with ultra-cold Bose gases <cit.>. In these settings, first a repulsive interaction was used to prepare an initial state, and then the interaction was switched to attractive by means of Feshbach resonances. When the strength of the attractive interaction was increased beyond a critical threshold, a blow-up process occurred, in which a large fraction of the condensate was lost <cit.>. The goal of our analysis is to understand this behaviour from a mathematical point of view for a 2D system. §.§ Setting In the framework of many-body quantum physics, the dynamics of a system of N (spinless) bosons in ℝ^2 can be described by the linear N-body Schrödinger equation, ı∂_tΨ_N(t)=H_NΨ_N(t), Ψ_N(0) = Ψ_N,0 , where the wave function Ψ_N(t) belongs to L^2_s(ℝ^{2N}), the space of square integrable functions of N variables in ℝ^2 satisfying the bosonic symmetry Ψ_N(t,x_1, ..., x_N)= Ψ_N(t,x_σ(1), ..., x_σ(N)) ∀σ∈ S_N, ∀ x_i∈ℝ^2 , where S_N denotes the set of all permutations of {1, ..., N}. We will work with a non-relativistic system with short-range interactions, where the underlying Hamiltonian is typically given by H_N=∑_j=1^N(-Δ_j)+1/(N-1)∑_1≤ j<k≤ N w_N(x_j-x_k) , where w_N(x)=N^2βw(N^β x) , β>0 with a real-valued, even and bounded potential w. We do not impose any positivity condition on w; in particular, the attractive case w≤ 0 is allowed. When w is bounded, the Hamiltonian H_N is self-adjoint on L^2_s(ℝ^{2N}) with the same domain as the non-interacting Hamiltonian. Therefore, the linear Schrödinger equation (<ref>) has a unique global solution Ψ_N(t)=e^-itH_NΨ_N(0) with t∈ℝ, for every initial state Ψ_N(0) ∈ L^2_s(ℝ^{2N}). 
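For orientation, note an elementary property of this scaling (a direct computation, not a statement taken from elsewhere in the paper): by the substitution y = N^β x in ℝ^2, ∫_ℝ^2 w_N(x) dx = ∫_ℝ^2 N^2β w(N^β x) dx = ∫_ℝ^2 w(y) dy, while ‖w_N‖_L^∞ = N^2β‖w‖_L^∞ and the support of w_N shrinks to a region of diameter of order N^{-β}. Thus the total interaction strength is independent of N, but it is concentrated on shorter and shorter length scales as β grows, which is the heuristic behind the delta-interaction limit invoked below.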
The major challenge in the analysis of (<ref>) is that the relevant dimension grows fast as N→∞, making it very difficult to extract helpful information about the quantum system by numerical methods. Therefore, in practice, it is desirable to obtain collective descriptions by reasonable approximations, based on suitable assumptions on the initial state. Roughly speaking, Bose–Einstein condensation (BEC) is the phenomenon where many particles occupy a common quantum state. In particular, this is the case when the N-body wave function is approximately given by a factorized state, namely Ψ_N (t,x_1,x_2,...,x_N) ≈φ (t,x_1)φ(t,x_2)... φ(t,x_N) in an appropriate sense. Here the normalized function φ(t,·)∈ L^2(^2) describes the condensate and its evolution is governed by the cubic nonlinear Schrödinger equation (NLS) ı∂_tφ(t,x)=(-Δ_x+b|φ(t,x)|^2-μ(t))φ(t,x), φ(0,x)=φ_0(x) , where b=∫_^2w, μ(t)=b/2∫_^2 |φ(t,x)|^4. The equation (<ref>) can be formally obtained from (<ref>) using the assumption (<ref>) and the fact that w_N(x)=N^2β w(N^β x)→ b δ(x) weakly. The coupling constant b=∫ w in (<ref>) is crucial. The focusing case b<0 and the defocusing case b>0 correspond to rather different physical situations. In particular, we are interested in the focusing case where the NLS (<ref>) may blow up in finite time, even if the initial datum φ(0) is smooth <cit.>. The possibility of the finite-time blow up is closely related to instability, which we will explain below. §.§ Stability vs. Instability Since the 2D cubic NLS (<ref>) is mass critical, it is well-known from the work of Weinstein <cit.> that the possibility of the finite-time blow up for H^1-solution depends not only on the sign of the interaction, but also on its strength. To be precise, let us denote the critical interaction strength as the optimal constant a^*>0 in the Gagliardo–Nirenberg interpolation inequality[Equivalently, a ^* = Q_L ^2 ^2, where Q is the unique positive radial solution of -Δ Q + Q - Q ^3 = 0.] (∫_^2|∇ f(x)|^2 x̣)(∫_^2|f(x)|^2 x̣)≥a^*/2∫_^2|f(x)|^4 x̣, ∀ f∈ H^1(^2). Then from <cit.>, we have two distinct regimes: * Stability regime: if b>-a^*, then (<ref>) has a unique global solution for all initial data φ_0∈ H^1(^2) satisfying φ_0_L^2=1. * Instability regime: if b<- a^*, then finite-time blow up occurs, for example, for any initial datum φ_0∈ H^1(^2)∩ L^2(^2;|x|^2 x̣) satisfying φ_0_L^2=1 and ∫_^2 |∇φ_0(x) |^2 x̣ + b/2∫_^2 | φ_0(x) |^4 x̣ <0 . We also refer to Baillon–Cazenave–Figueira <cit.> for an earlier result on the global existence for the 2D cubic NLS, Ginibre–Velo <cit.> for a more general existence theory, and Merle <cit.> for a complete characterization of the minimal-mass blow-up solutions in the special case b=-a^*. For an analysis of the minimizer of the corresponding NLS energy functional as b→ a^*, see <cit.>. The existence of a universal blow-up profile was proved in <cit.>. A precise description of the blow-up solutions near the blow-up time was established in <cit.>. For works on the blow-up rate, we refer to <cit.>. For the N-body quantum dynamics (<ref>), the solution Ψ_N(t) exists globally for every L^2-initial datum. Nevertheless, we can still discuss stability and instability regimes by considering the boundedness of the energy per particle. * Stability regime: the system is stable of the second kind if H_N ≥ -CN for some constant C>0 independent of N (see <cit.>). In principle, the many-body stability (<ref>) is stronger than the NLS stability. 
By testing (<ref>) against factorized states, it is not difficult to see that (<ref>) implies that b=∫ w ≥ - a^*. Conversely, if ∫ w_- > -a^* for w_-=min{w,0} the negative part of w, then it is known that (<ref>) holds for β≤ 1/2 <cit.> (see also <cit.> for related bounds for trapped systems). * Instability regime: if ∫ w < -a^*, then we only have (using w_N_L^∞≤ CN^2β) H_N ≥ -CN^1+2β, and the optimality of the lower bound can be seen by testing against factorized states. In particular, (<ref>) allows the energy per particle to diverge to -∞ as N→∞, which is consistent with blow up of the NLS (<ref>). §.§ The derivation of NLS from many-body dynamics. The rigorous derivation of the NLS has been studied since the 1970s, initiated by Hepp <cit.>, Ginibre–Velo <cit.> and Spohn <cit.>. In the defocusing case w≥ 0, we refer to <cit.> for 1D case, <cit.> for 2D, <cit.> for 3D, <cit.> for the effectively 2D dynamics of strongly confined 3D systems, and the book <cit.> for further results. In the focusing case (w≤ 0) in 2D, most of the existing works in the literature are based on the stability condition ∫ |w_-| < a^*. In this case, the focusing NLS (<ref>) is globally well-posed, and its derivation from the many-body equation (<ref>) was given by Chen–Holmer <cit.> and Jeblick–Pickl <cit.> under the technical addition of a trapping potential like V(x)=|x|^s, enabling them to use stability of the second kind for 0<β<(s+1)/(s+2) by <cit.>. Since the stability (<ref>) was later extended to trapped systems for 0<β<1 <cit.>, the approaches in <cit.> are conceptually applicable for that range of β. In another approach, Nam-Napiórkowski <cit.> used only a weaker form of (<ref>) (but they still require the stability condition ∫ |w_-| < a^*), Thus, being able to removing the trapping potential for all 0<β<1. In the present paper, we will give a novel derivation of the focusing NLS (<ref>) which covers arbitrarily negative potentials w and all β∈ (0,3/2). Without the stability condition ∫ |w_-| < a^*, one only has the very weak bound (<ref>) instead of (<ref>), and the methods in <cit.> do not apply, or apply only to a relatively small range of β. To our knowledge, for an arbitrarily negative potential, the derivation of the NLS (<ref>) prior to the blow-up time is only available for β<1/2, following the methods in <cit.>. Our extended range of β is remarkably large, allowing to make connections to the typical physical setting of dilute Bose gases beyond the mean-field regime, which requires at least β>1/2. Even in the stability regime, our result is new since we allow β≥1. Actually, we will derive (<ref>) from a stronger result, namely a norm approximation of the many-body quantum dynamics which also describes the fluctuations around the condensate in the spirit of Bogoliubov's theory. That result requires further notation and explanation, which we defer to the next section. § MAIN RESULTS Recall that we consider the Schrödinger equation (<ref>) with the Hamiltonian H_N given in (<ref>), where (x)=N^2βw(N^β x) as in (<ref>). We will give rigorous descriptions of the macroscopic behavior of the many-body dynamics Ψ_N(t)=e^-ı tH_NΨ_N,0 when N→∞, including the NLS (<ref>) as the leading order approximation, and a norm approximation in L_s^2(^2N) as the second order approximation. We always impose the following condition on the interaction potential. Let w∈ L^∞(^2) be compactly supported and w(x)=w(-x)∈. Note that we do not put any assumption on the sign and the size of w. 
In particular, w can be arbitrarily negative. §.§ Derivation of the NLS Let us recall the following well-known result concerning the NLS (<ref>) (see e.g. <cit.>). For every b∈ℝ and φ_0∈ H^1(^2) with φ_0_L^2=1, there exists a unique solution φ∈ C([0,),H^1(^2)) of (<ref>) with a unique maximal time ∈ (0,∞]. Moreover, if <∞, then lim_t ↗φ(t) _H^1= ∞. For non-trivial interactions w, the many-body quantum state Ψ_N(t) is not expected to be close to the factorized state φ(t)^⊗ N in norm (see Theorem <ref> below). Therefore, the leading order approximation (<ref>) has to be understood in an average sense, which can be formulated properly in terms of reduced density matrices. For every normalized vector Ψ_N ∈ L^2_s(^2N), its one-body density matrix is a non-negative operator on L^2(^2) with kernel (x;y) = ∫_^2(N-1)Ψ_N(x,x_2,...,x_N) Ψ_N(y,x_2,...,x_N)x̣_2 ... x̣_N . Equivalently, it can be obtained by taking the partial trace = _2→ N |Ψ_N⟩⟨Ψ_N|. Clearly, if Ψ_N= φ^⊗ N, then = |φ⟩⟨φ| (the rank-one projection onto φ∈ L^2(^2)). In general, the approximation ≈ |φ⟩⟨φ| with respect to the trace norm is an appropriate interpretation of (<ref>). Our first main result is a rigorous derivation of the NLS (<ref>) from (<ref>). Let β∈ (0,3/2), 0<α_1< min (β/2, 1/8, (3-2β)/16) and let w satisfy Assumption <ref>. Let φ(t) be the solution of (<ref>) on the maximal time interval [0,) as in Lemma <ref> with initial datum φ_0∈ H^4(^2), φ_0_L^2=1. Let Ψ_N(t) be the solution of (<ref>) with a normalized initial state Ψ_N,0∈ L^2_s(^2N) satisfying N((1-Δ) q q ) ≤ C, q= 1 - |φ_0⟩⟨φ_0| for some constant C>0. Then for every t∈[0,), we have Bose–Einstein condensation in the state φ(t), i.e., | -|φ(t)⟩⟨φ(t)||≤ C_t N^-α_1 for sufficiently large N, where C_t is independent of N and continuous on [0,). The initial condition (<ref>) means that at the time t=0, the total kinetic energy of all excited particles outside the condensate φ_0 is bounded. Thus, there are only few excitations, which is a key assumption allowing us to control the fluctuations around the condensate φ(t) for all t∈[0,) by using an energy method. The kinetic bound (<ref>) has been proven for the ground state or low-lying excited states of trapped systems with suitable repulsive interactions, see e.g. <cit.>. The following statement is a direct consequence of Theorem <ref> and the definition of in Lemma <ref>. We keep the same assumptions as in Theorem <ref>, and assume additionally that <∞. Then there exists a sequence N(t)∈ℕ such that N(t)→∞ as t↗ and such that lim_t↗1/N(t)Ψ_N(t)(t), ∑_j=1^N(t)( -Δ_j) Ψ_N(t)(t)= lim_t↗ (-Δγ^(1)_Ψ_N(t)) =∞. The implication of Corollary <ref> follows from a well-known argument (see <cit.>): for every t∈[0,), the trace convergence in Theorem <ref> and Fatou's lemma imply that lim inf_N→∞((1-Δ)(t))≥((1-Δ)|φ(t)⟩⟨φ(t)|)=φ(t)^2_H^1(^2). Therefore, if <∞, then the one-body blow-up condition (<ref>) implies the many-body blow-up result (<ref>). Note that (<ref>) is only an inequality, hence the reverse direction, which would imply that the many-body blow-up phenomenon does not occur at any fixed time t∈[0,), cannot be deduced from Theorem <ref>. We expect that this holds true, but a proof would require some additional analysis, which we will not pursue in the present work. Note that in Theorem <ref> we do not make any assumption on the sign of the potential w, hence our result also covers the repulsive case w≥ 0. 
In this case, Theorem <ref> holds true for all β>0; more precisely, it was proved in <cit.> for the scaling regime e^2N w(e^N x), which leads to a subtle correction producing the scattering length of the potential in the NLS (<ref>) instead of b=∫ w. Our result is mostly interesting in the focusing case w≤ 0. In this case, the NLS (<ref>) was derived in <cit.> under the stability condition ∫ |w_-| < a^* for β∈ (0,1) (see also <cit.> for earlier related results). Without the stability condition, the derivation of the NLS (<ref>) prior to the blow-up time can be shown for β<1/2, following the methods in <cit.>, given the uniform-in-N bounds on the Hartree equation which we prove in Lemma <ref>. Thus, our result, which applies to all β∈ (0,3/2) and to a general w, substantially extends these existing results. §.§ Norm approximation Let us now discuss the fluctuations around the condensate. For this purpose, we first introduce the following Hartree-type equation ı∂_t u_N(t,x)= (-Δ_x+(*|(t, · )|^2)(x)-(t))u_N(t,x) =:(t) (t,x), u_N(0,x)= φ_0(x), with (t)=1/2∬_^2×^2 |(t,x)|^2(x-y)|(t,y)|^2. The Hartree dynamics (<ref>) essentially play the same role as the NLS dynamics (<ref>) in the leading order description, but using the former is slightly more natural for the second order approximation (see <cit.> for a similar choice). In particular, (<ref>) has a unique global solution, and u_N(t)_H^1 is bounded uniformly in N and locally in time when t∈ [0,) with given in Lemma <ref>. Moreover, since u_N(t)→φ(t) in L^2(^2) as N→∞, the convergence (<ref>) remains true if φ(t) is replaced by u_N(t) (see Lemma <ref> for the details). To describe the excitations around the condensate, it is convenient to switch to a Fock space setting where the number of particles is not fixed. Let us introduce the one-body excited space (t)={(t)}^⊥⊂= L^2(^2) and the (bosonic) Fock spaces over , (t)=⊕_k=0^N⊗_^k ⊂(t)=⊕_k≥ 0⊗_^k ⊂ =⊕_k≥ 0⊗_^k . Note that (t) and its subspace ^≤ N(t) are time-dependent via (t), and they are naturally embedded in the full Fock space over . Let us recall the standard second quantization formalism, where the creation and annihilation operators on , (f) and a(f), are defined by ((f)Χ)^(k)(x_1 x_k) = 1/√(k)∑_j=1^kf(x_j)χ^(k-1)(x_1 x_j-1,x_j+1 x_k), ∀ k≥ 1, (a(f)Χ)^(k)(x_1 x_k) = √(k+1)∫_^2x̣f(x)χ^(k+1)(x_1 x_k,x), ∀ k≥ 0 for all f∈ L^2(^2) and Χ=(χ^(k))_k=0^∞∈. It is also convenient to introduce the operator-valued distributions _x, a_x by (f)=∫x̣ f(x) _x , a(f)=∫x̣f(x) a_x , which satisfy the canonical commutation relations [a_x,_y]=δ(x-y) , [a_x,a_y]=[_x,_y]=0 . Using this language, we define the second quantization of one- and two-body operators as (T) =0⊕⊕_k≥ 1∑_j=1^k T_j=∬ T(x;x')_xa_x'' , (S) =0⊕0⊕⊕_k≥ 2∑_1≤ i<j≤ kS_ij =1/2 S(x,y;x',y')_x_ya_x'a_y''' , for T(x,x') and S(x,y;x,y') the kernels of the operators T on and S on ^2 (see, e.g., <cit.>). In this language, the Hamiltonian (<ref>) can be expressed equivalently as H_N = _1(-Δ) + 1/N-1_2 (w_N) on ^N. We also introduce the number operator on Fock space , =_1(𝕀), where 𝕀 this is the identity operator on , and denote the cut-off functions =𝕀(≤ m) , =𝕀( > m), ∀ m∈ (0,∞). Following the approach in <cit.>, the N-body dynamics Ψ_N(t)∈ L^2_s(^2N) can be decomposed as Ψ_N(t)=∑_k=0^N(t)^⊗ (N-k)⊗_sϕ_N^(k)(t) = ∑_k=0^N ((t))^⊗ (N-k)/√((N-k)!)ϕ_N^(k)(t) for ⊗_s the symmetric tensor product and where the vector (t)=(ϕ_N^(k)(t))_k=0^N ∈(t) ⊂(t) describes the excitations around the condensate u_N(t) (see Section <ref> for details). 
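As a small numerical aside (not part of the paper's argument, and only the single-mode analogue of the operators above), the canonical commutation relations and the number operator can be checked on a truncated Fock space with numpy:

import numpy as np

M = 6                                              # keep Fock states |0>, ..., |M>
a = np.diag(np.sqrt(np.arange(1, M + 1)), k=1)     # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T                                  # creation operator
num = adag @ a                                     # number operator, diag(0, 1, ..., M)

comm = a @ adag - adag @ a                         # canonical commutation relation
print(np.allclose(comm[:M, :M], np.eye(M)))        # True: [a, a^dagger] = 1 below the cut-off
print(np.diag(num))                                # [0. 1. 2. ... M]
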
Our goal is to approximate the N-body dynamics (t) by the solution Φ(t) of the simpler evolution equation ı∂_tΦ(t)=(t)Φ(t) Φ(0)=Φ_0 , where (t) denotes the Bogoliubov-Hamiltonian (t)= ( (t)+ K_1(t)) +1/2(∬ K_2(t,x,y)_x_y+). In (<ref>), (t) is defined in the Hartree equation (<ref>), and K_1(t)=q(t)K̃_1(t)q(t) , K_2(t)=q(t)⊗ q(t) K̃_2(t) where q(t)=1-p(t)=1- |(t)⟩⟨(t)| and the kernel of the operator K̃_1(t) and the function K̃_2(t)∈^2 are given by K̃_1(t,x,y) =(t,x)(x-y)(y) , K̃_2(t,x,y) =(t,x)(x-y)(y) . The effective generator (t) emerges from the Bogoliubov approximation when we write the Hamiltonian H_N in the second quantization formalism, then implement the c-number substitution a(u_N),(u_N)↦√(N), and finally keep only the terms that are quadratic in creation and annihilation operators. Note that (t) is an operator on the full Fock space since (t) does not leave (t) invariant, but it does not contradict the fact that Φ(t)∈(t) (see e.g. <cit.> for a detailed explanation). Moreover, (t) is N-dependent, although we do not make this explicit in the notation. The Bogoliubov equation (<ref>) is globally well-posed (see Lemma <ref>). Now we are ready to state our second main result. Let β∈ (0,3/2), 0<α_2< min (1/8, (3-2β)/16) and let w satisfy Assumption <ref>. Let u_N(t) be the solution of the Hartree equation (<ref>) with initial darum φ_0∈ H^4(^2), φ_0=1. Let Φ(t)=(ϕ^(k)(t))_k=0^∞∈(t) be the solution of the Bogoliubov equation (<ref>) with initial datum Φ_0=(ϕ^(k)_0)_k=0^∞∈(0) satisfying Φ_0=1 and ⟨Φ_0, (1-Δ) Φ_0⟩≤ C for some constant C≥0. Let Ψ_N(t) the the solution of the Schrödinger equation (<ref>) with initial datum Ψ_N,0=∑_k=0^Nφ_0^⊗ (N-k)⊗_s ϕ_0^(k)= ∑_k=0^N (φ_0)^⊗ (N-k)/√((N-k)!)ϕ_0^(k). Then, for all t∈ [0,), we have the norm approximation Ψ_N(t)-∑_k=0^N (t)^⊗ (N-k)⊗_sϕ^(k)(t)≤ C_t N^-α_2 , where the constant C_t is independent of N and continuous in t∈ [0,). Note that under the decomposition (<ref>), the kinetic condition (<ref>) is equivalent to condition (<ref>) in Theorem <ref> (see Remark <ref>). Strictly speaking, the state Φ_N,0 in (<ref>) is not normalized in L^2_s(^2N), but the condition (<ref>) ensures that 1≥Ψ_N,0^2 = 1- 𝕀_{> N}Φ_0^2 ≥ 1-⟨Φ_0, (/N) Φ_0⟩≥ 1- CN^-1. The norm approximation in N-body space was given in <cit.> in the mean-field regime (β=0). Recently, also higher order corrections to Bogoliubov's theory in the mean-field regime were derived in <cit.>. For repulsive interaction w≥ 0, the validity of Bogoliubov's theory was extended to 0<β<1 in 3D <cit.> (see also <cit.> for earlier results). In 3D it is natural to restrict 0<β<1 as Bogoliubov's theory is no longer correct in the Gross–Pitaevskii regime, but the method in <cit.> seems applicable to a less restricted range of β in 2D. Hence, our result is mainly interesting in the attractive case w≤ 0, where the validity of Bogoliubov's theory was known only for 0<β<1 in the stability regime ∫ w_- > -a^* <cit.>. Note that in the literature, there are also many works devoted to the dynamics around the coherent states in Fock space, initiated in <cit.>. We refer to <cit.> for a detailed comparison between the N-body setting and the Fock space situation, and the book <cit.> for further results. Our method is also applicable to this setting, but we skip the details. The ideas of our proof are explained in the next section, and the full technical details are provided afterwards. §.§ Notation * We will use C>0 for a general constant which may depend on w and φ_0 and which may vary from line to line. 
We also use the notation C_t to highlight the time dependence. * When it is unambiguous, we abbreviate the L^2-norm and the corresponding inner product by · and · , ·, respectively. § PROOF STRATEGY In this section we explain the main ingredients of the proof. We will focus on Theorem <ref>, which implies Theorem <ref>. Our approach is based on Bogoliubov's approximation where the fluctuations around the condensate are effectively described by an evolution equation with a quadratic generator in Fock space. The main mathematical challenge is to justify this approximation by rigorous estimates. Let us first give an overview of the proof strategy, and then we come to the detailed setting. As an important input of Bogoliubov's theory <cit.>, we expect that most particles are in the condensate (t), which is governed by the Hartree equation (<ref>). The first step in our analysis is to establish several uniform-in-N bounds for the Hartree dynamics, which is nontrivial due to the instability issue. These one-body estimates require a careful adaptation of the analysis of the NLS (<ref>) in <cit.>, which will be discussed in Section <ref>. In the following, we will focus on the many-body aspects of the proof. In order to extract the excitations, namely the particles outside the condensate, from the N-body wave function Ψ_N(t), we use the unitary transformation U_N(t) introduced in <cit.>. This is a mathematical tool to implement Bogoliubov's c-number substitution <cit.>, resulting in the evolution Φ_N(t)=U_N(t) Ψ_N(t) on the excited Fock space (t) where the generator (t) was computed explicitly in <cit.>. Thus, we can rewrite (<ref>) in terms of excitations as Φ_N (t) - Φ(t)^2 ≤ C_t N^-α_2 for all t∈[0,), where Φ(t) is the solution to the Bogoliubov equation (<ref>). The main difficulty in proving (<ref>) is the lack of the stability of the second kind (<ref>). More precisely, with an arbitrarily negative potential w, we do not expect to have a good lower bound for the generator (t) of Φ_N(t), which in turn prevents us from obtaining a good kinetic bound for Φ_N(t). A key observation in <cit.> is that a weaker version of the stability (<ref>) holds if we restrict to a space of few excitations. Rigorously, for the truncated dynamics Φ_N,M(t) ∈_^≤ M(t) which is associated to the generator (t) with a parameter M=N^1-δ, δ∈ (0,1), it was proved in <cit.> that Φ_N,M satisfies an essentially-uniform kinetic bound, and hence Φ_N,M(t)-Φ(t) can be controlled efficiently (see Lemma <ref> below). Thus, by the triangle inequality, the main missing ingredient for (<ref>) is a good estimate for the norm Φ_N(t)-Φ_N,M(t). For this term, we cannot use the analysis in <cit.>, which crucially relies on the stability condition ∫ |w_-| < a^*. The main novelty of the present paper is the introduction of a new method which does not require any information about the full dynamics Φ_N. This kind of ideas was previously used in <cit.>, where various propagation bounds were established by Cauchy–Schwarz inequalities of the form |⟨Φ_N, A Φ_N,M⟩ | ≤Φ_N A Φ_N,M. However, this approach is insufficient to handle the dilute regime where β>1/2. To improve the Cauchy–Schwarz argument, we decompose 1=^-1 with a suitable weight >0 and split (<ref>) into |⟨Φ_N, A Φ_N,M⟩ | ≤ |⟨Φ_N, ^-1 A Φ_N,M⟩ | + |⟨Φ_N, ^-1 [, A] Φ_N,M⟩ |. 
The first term on the right-hand side of (<ref>) looks similar to |⟨Φ_N, A Φ_N,M⟩ | but it is easier to bound by the Cauchy–Schwarz inequality provided that we can bound A^* ^-1Φ_N in terms of Φ_N in an average sense. For the second term on the right-hand side, we gain some cancelation due to the commutator [, A], which eventually ensures that ^-1 [, A] Φ_N,M is much smaller than A Φ_N,M. In the remainder of this section, we will provide some further details of the above ingredients. §.§ Reformulation of the Schrödinger equation Our starting point is a reformulation of the Schrödinger equation (<ref>), following the method proposed in <cit.>. Let u_N(t) be the Hartree evolution in (<ref>). To factor out the contribution of the condensate, we use the excitation map U_N(t):^N(t)→(t) defined by U_N(t)=⊕_k=0^N q(t)^⊗ k( a(u_N(t))^N-k/√((N-k)!)) where q(t) = 1- |(t)⟩⟨(t)| as in (<ref>). It was proven in <cit.> that U_N(t) is a unitary transformation and its inverse is given by (<ref>), namely U_N(t)^* Φ = ∑_k=0^N(t)^⊗ (N-k)⊗_sϕ^(k) = ∑_k=0^N ((t))^⊗ (N-k)/√((N-k)!)ϕ^(k) for all Φ=(ϕ_k)_k=0^N ∈(t). Heuristically, the mapping U_N provides an efficient way of focusing on the fluctuations around the Hartree state u_N(t)^⊗ N; in particular, U_N(t) u_N(t)^⊗ N=Ω is the vacuum of (t). It was also proven in <cit.> that for f,g∈(t), we have the following identities on (t): U_N ()a()U_N^* =N- , U_N (f)a() U_N^* =(f)√(N-) , U_N ()a(g)U_N^* =√(N-)a(g) , U_N (f)a(g)U_N^* =(f)a(g) . If Bose–Einstein condensation holds, then, in an average sense, ≪ N in (t). Therefore, (<ref>) can be interpreted as a rigorous implementation of Bogoliubov's c-number substitution <cit.>, where a() and () are formally replaced by the scalar number √(N). Note that from (<ref>) we have U_N^*_1(qAq)U_N=_1(qAq) for any operator A on . Consequently, (<ref>) is equivalent to (<ref>). Now we consider the transformed dynamics Φ_N(t)=U_N(t)Ψ_N(t). The Schrödinger equation (<ref>) can be written in the equivalent form ı∂_t(t)=(t)(t) (0)= U_N(0)^* Ψ_N,0. Here, the generator (t) can be computed explicitly, using the second-quantized form (<ref>) and the rules (<ref>) (see <cit.>), as (t)=(ı∂_tU_N(t))U_N^*(t)+U_N(t)H_NU_N^*(t)=1/2∑_j=0^4(_j+_j^*) with _0 = ()+(K_1)N-/N-1+(q(t)(+Δ)q(t))1-/N-1 , _1 =-2(q(t)(*|(t)|^2)(t)) √(N-)/N-1 , _2 =∬ K_2(t,x,y)_x_y√((N-)(N--1))/N-1 , _3 =(q(t)⊗ q(t) 𝕀⊗ q(t))(x,y;x',y')(t,x) ×_x _y a_y''' √(N-)/N-1, _4 =1/N-1_2 (q(t)⊗ q(t) q(t)⊗ q(t)). Recall that (t) is given in (<ref>), and K_1(t) and K_2(t) are given in (<ref>). In the above notation, denotes the function :^2→ in _1(t), and the two-body multiplication operator (x-y) in _3(t) and _4(t). §.§ Simplified dynamics Following Bogoliubov's heuristic ideas <cit.>, we consider a simplification of (<ref>), where only the quadratic terms _0 and _2 in the generator are kept. This leads to the Bogoliubov equation (<ref>), whose well-posedness is well-known, see for example <cit.>. Let w satisfy Assumption <ref>, let u_N(t) be the Hartree evolution in (<ref>) with initial state φ_0∈ H^4(^2), and let Φ_0∈(0) be a normalized vector satisfying (<ref>). Then the Bogoliubov equation (<ref>) with initial condition Φ_0 has a unique global solution Φ∈ C( [0,∞),) ∩ L^∞_loc( (0,∞),𝒬((1-Δ))) and Φ(t)∈(t) for all t>0. Moreover, for all t∈[0,) and any >0, we have Φ(t),(1-Δ)Φ(t)≤ C_t,ε N^ε , where is given in Lemma <ref> and the constant C_t, is independent of N. The global well-posedness of Φ(t) is shown in <cit.>. 
The kinetic bound (<ref>) follows from the analysis in <cit.> and the uniform bounds of u_N(t), which will be given later in Lemma <ref>. In order to estimate the difference (t)-Φ(t), we follow <cit.> and introduce the truncated dynamics (t) ∈_^≤ M(t), which solve the equation ı∂_t(t)=(t)(t) (0)=Φ_0 . As explained in <cit.>, the main advantage of (<ref>) is that the truncated generator is stable, namely (t)≥1/2_1(-Δ) - C_t, N^ for all t∈[0,). This allows to establish an efficient kinetic bound for (t), which is not available for Φ_N. Consequently, it is much easier to compare (t) with the Bogoliubov dynamics. We collect some known properties of (t) in the following lemma. We keep the assumptions of Lemma <ref>. Let M=N^1-δ for some constant δ∈ (0,1). Then the equation (<ref>) has a unique global solution (t) ∈_^≤ M(t) with t∈ [0,∞). Moreover, for every t∈[0,), we have (t),(1-Δ)(t)≤ C_t,εN^ε and (t)-Φ(t)^2≤ C_t,εN^ε(√(M/N)+1/M). The global well-posedness of (t) follows from the general method in <cit.> (see also <cit.>). Given the uniform bounds of u_N(t) in Lemma <ref>, the bounds (<ref>) and (<ref>) follow from the arguments in <cit.> and <cit.>, respectively. §.§ From the truncated to the full dynamics Given Lemma <ref>, the missing piece for the proof of Theorem <ref> is an estimate for (t)-(t). The main new ingredient of the present paper is the following bound: We keep the assumptions of Lemma <ref>. Let M=N^1-δ for some constant δ∈ (0,1). Let Φ_N and Φ_N,M be solutions of (<ref>) and (<ref>), with initial data Φ_N(0)=𝕀^≤ NΦ_0, Φ_N,M(0)=𝕀^≤ MΦ_0, respectively. Then for every t∈[0,) and every >0, we have (t)-(t)^2 ≤ C_t, N^(1/√(M)+N^β/M^3/2). Eventually, we will take δ>0 small, hence the condition β<3/2 is needed to ensure that the error term N^β/M^3/2 on the right hand side of (<ref>) is negligible. In order to prove Proposition <ref>, by norm conservation of Φ_N(t) and Φ_N,M(t), it suffices to show that (t),(t) is close to 1. For technical reasons, it is more convenient to consider (t),f_M^2(t) with f_M a smoothened version 𝕀^≤ M. To be precise, we fix a smooth function f:→[0,1] such that f(s)=1 for s≤ 1/2 and f(s)=0 for s≥ 1, and define the operator on by =f(/M). We will deduce Proposition <ref> from a Grönwall argument and the estimate |/ṭ(t),f_M^2(t)|≤ C_t,εN^ε(1/√(M)+N^β/M^3/2). It remains to explain the proof of (<ref>). Let us drop the time dependence from the notation where it is unambiguous. From the equations (<ref>) and (<ref>), we have |/ṭ(t),f_M^2(t)|= |,[,^2]| since ∈ and ^2=^2. Then it is straightforward to decompose into the sum of _j as in (<ref>). Since f_M is a function of , only the particle number non-preserving terms , and contribute to the commutator. One of the most difficult terms is the quadratic one |Φ_N,[_2,^2]|, where two annihilation operators hit Φ_N. Since _2 only changes the number of particles by at most 2, the commutator with ^2 allows us to gain a factor M^-1. Therefore, estimating (<ref>) essentially boils down to proving a bound for 1/M|∬ (x)(y)(x-y)Φ_N,_x_y| . In <cit.>, a variant of this term was estimated using a kinetic bound for Φ_N based on the method in <cit.> and the stability condition ∫ |w_-| < a^*. In the present paper, since we are considering a general potential w including the instability regime ∫ w_- < - a^*, we only have ⟨Φ_N, _1 (1-Δ)Φ_N ⟩≤ C N^1+2β, which can be deduced from a variant of the energy lower bound (<ref>). 
However, the latter bound is too weak, and inserting it in the analysis in <cit.> produces a solution only for β<1/2. Another idea, which can be extracted from the approach in <cit.>, is to handle (<ref>) by the Cauchy–Schwarz inequality |⟨Φ_N, _x _y Φ_N,M⟩| ≤Φ_N_x _y Φ_N,M. (To be precise, a variant of this argument was used in <cit.> to compare Φ_N directly with the Bogoliubov dynamics Φ.) The advantage of (<ref>) is that no information about Φ_N is needed. However, since we have to couple (<ref>) with the singular potential (x-y) in (<ref>), we eventually obtain a large factor w_N_L^∞∼ N^2β, and the final bound is only good for β<1/2. Thus, to cover the extended range β∈ (0,3/2), new ideas are needed to handle (<ref>). In the present paper, on the one hand, we will not rely on any information of Φ_N; moreover, instead of using directly (<ref>) we will further decompose (<ref>) by introducing a weight given by :=(||)+1 = 1/2∫|(x-y)|_x_ya_xa_y+1. By inserting 1=^-1/2^1/2, we can split _x _y = ^-1/2_x _y ^1/2 + ^-1/2 [^1/2,_x _y]. Then, using the triangle inequality we can bound (<ref>) by 1/M∬ |(x)| |(y)| |(x-y)| | Φ_N, ^-1/2_x _y ^1/2| + 1/M∬ |(x)| |(y)| |(x-y)| |Φ_N, ^-1/2 [^1/2,_x _y] |. The key point is that although the first term in (<ref>) looks similar to (<ref>), it is much easier to control. Indeed, by the Cauchy–Schwarz inequality, we have 1/M∬ |(x)| |(y)| |(x-y)| | Φ_N, ^-1/2_x _y ^1/2| ≤1/M∬ |(x)| |(y)| |(x-y)| a_x a_y ^-1/2Φ_N^1/2 ≤1/M( ∬ |(x-y)| a_x a_y ^-1/2Φ_N^2 )^1/2× ×( ∬x̣ỵ |(x)|^2 |(y)|^2 |(x-y)| )^1/2^1/2. Then, by the definition of , we can bound ∬ |(x-y)| a_x a_y ^-1/2Φ_N^2 = ⟨Φ_N, ^-1/2_2(||) ^-1/2Φ_N⟩ ≤Φ_N^2, without relying on any information on Φ_N. The other factors in (<ref>) can be bounded efficiently using _L^1≤ C and good estimates on Φ_N,M and . All this allows to bound (<ref>) by C_t,N^/√(M)Φ_N, which appears as the first error term on the right-hand side of (<ref>). We still have to bound the second term in (<ref>). This term looks complicated, but in principle, we gain a huge cancelation from the commutator [^1/2,_x_y] due to the fact that is a “local operator”. To make it more transparent, we can use the formula ^1/2=1/π∫_0^∞1/√(s)/+s to write, for any operator B, [^1/2,B] =1/π∫_0^∞1/√(s)[ /+s, B ] = 1/π∫_0^∞√(s)/+s[,B]1/+s. In particular, a straightforward computation shows that [,_x_y] = [(||),_x_y] =|(x-y)|_x_y+∫(|(z-x)|+|(z-y|)_x_y_za_z. Let us take |(x-y)|_x_y from (<ref>) and insert it in (<ref>). The corresponding contribution from the second term in (<ref>) can be controlled by 1/M∫_0^∞√(s)∬ |(x)| |(y)| |(x-y)|^2 a_xa_y^-1/2/+sΦ_N 1/+s. The resolvents (+s)^-1 are important in two respects: one the one hand, they provide sufficient decay in s via the estimate (+s)^-1≤(1+s)^-1. On the other hand, they compensate for the singular interaction, which is similar to the argument in (<ref>) although ||^2 is way more singular than ||. To combine these two ideas, we use again the Cauchy-Schwarz inequality on ^2×^2, where we split ||^2=||^1+/2||^1-/2 for >0 small and estimate ^-1/2/+sΓ̣(||^2-)^-1/2/+s≤^1-/(+s)^2≤1/(1+s)^1+. Here, we used that (||^2-)≤ (_2(||))^2-≤^2- , which relies heavily on the locality of , namely (||) is the second quantization of a two-body multiplication operator (see Lemma <ref>). Finally, by calculating the L^2-norm of ||^1+/2, which appears in (<ref>), we eventually obtain C_t,N^ N^β/M^3/2, the second error term on the right-hand side of (<ref>). This completes our overview of the main ingredients of the proof. 
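For the reader's convenience, we also record the elementary bound behind the last estimate above (this short verification is our own addition and not part of the original argument). Since the weight operator introduced above is bounded below by 1, it suffices to observe that for every λ≥ 1, s≥ 0 and ε∈(0,1), λ^{1-ε}/(λ+s)^2 = (λ/(λ+s))^{1-ε} · (λ+s)^{-(1+ε)} ≤ (λ+s)^{-(1+ε)} ≤ (1+s)^{-(1+ε)}, using λ/(λ+s)≤ 1 and λ+s ≥ 1+s. Applying the spectral calculus then gives the operator inequality with the decay (1+s)^{-(1+ε)} used in the s-integral.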
Organization of the paper. In Section <ref>, we establish uniform-in-N estimates for the Hartree dynamics. The most technical part of the paper is contained in Section <ref> where we prove Proposition <ref>. From this, we conclude the main results in Section <ref>. § UNIFORM ESTIMATES FOR HARTREE EVOLUTION In this section, we consider the Hartree evolution in (<ref>). By Assumption <ref>, it is globally well-posed in H^k, k∈{1,2,…} for any fixed N by <cit.>. However, it is a priori not clear whether (t)_H^k is bounded uniformly in N for fixed t∈[0,). In the following lemma, we prove such uniform bounds for all times prior to the NLS blow-up time . Let w satisfy Assumption <ref>. Let φ_0∈ H^4(^2) and as in Lemma <ref>. Then, for every T∈[0,), there exists a constant C=C(T,φ_0)>0 such that for all t∈[0,T] and all N sufficiently large, (t)_L^∞≤ C(t)_H^2(^2)≤ C, ∂_t (t)_H^2(^2)≤ C. Moreover, for φ(t) the solution of the NLS (<ref>), it holds that (t)-φ(t)_L^2(^2)≤ CN^-β . For interactions satisfying the stability condition ∫_^2|w_-|<a^*, (<ref>) has been shown in <cit.>. As explained in <cit.>, the key point is to get the uniform bound (t)_H^1(^2)≤ C, and the rest follows from a rather general argument. In the stability regime considered in <cit.>, this bound follows immediately from energy conservation and (<ref>), namely ℰ[u_N(t)] = ∇ u_N(t) + 1/2∬|(t,x)|^2(x-y)|(t,y)|^2 ≥∇ u_N(t) - 1/2(w_N)_-_L^1u_N(t)_L^4^4 ≥∇(t)^2 (1- ∫_^2|w_-|/a^*). For a general w in our Lemma <ref>, the estimate (<ref>) is not available, hence the estimate of (t)_H^1(^2) is more complicated. Instead of studying u_N directly as in <cit.>, we will focus on the difference (t)=(t)-φ(t). We will bound (t) by a bootstrap argument consisting of two steps: * If (t)≤δ=√(a^*/32w_L^1), then ∇θ_N(t)≤ C. This follows from energy conservation and the a priori bound φ(t)_H^1(^2)≤ C on [0,T]. * If (s)≤δ for all s∈[0,t], then (t)≤ N^-β≪δ for sufficiently large N. To prove this, we use Step 1 and Gronwall's lemma. The conclusion thus follows from the continuity of the map t↦(t) and the initial condition (0)=0. Now let us go to the details. For simplicity, we consider the solutions (t) and φ(t) of the equations (<ref>) and (<ref>) without the phases (t) and μ(t), respectively. Due to the gauge transformation ↦exp{-ı∫_0^tμ(s)}, this does not change the N-dependence of the estimates (<ref>). Let δ=√(a^*/(32w_L^1)) with a^* as in (<ref>). Step 1. Assume that (t)≤δ for some t∈ [0,T]. Using the energy conservation of (<ref>) and the fact _L^1=w_L^1, we can bound ℰ[u_N(t)] = ℰ[φ_0] = ∇φ_0^2 + 1/2∬ |φ_0(x)|^2 w_N(x-y) |φ_0(y)|^2 x̣ỵ ≤∇φ_0^2 + 1/2w_L^1φ_0_L^4^4 ≤ C. On the other hand, using (<ref>) and the assumption (t)≤δ, we can bound θ_N(t) _L^4^4 ≤2 δ^2 /a^*∇θ_N(t) ^2 = 1/16 w_L^1∇θ_N(t) ^2. Combining with 1/2∇(t)^2≤∇(t)^2+∇φ(t)^2, we find that ℰ_N[(t)] ≥∇(t)^2-1/2w_L^1(t)+φ(t)_L^4^4 ≥( 1/2∇(t)^2 - ∇φ(t)^2 ) - 4 w_L^1 (θ_N(t) _L^4^4 + φ (t) _L^4^4 ) ≥1/4∇(t)^2-C , where we used that ∇φ(t)≤ C on [0,T]. Consequently, (<ref>) and (<ref>) imply that ∇(t)^2≤ C. Step 2. Let s∈[0,t] and assume that (s)≤δ. Then, dropping the time dependence from the notation, we find that 1/2|∂_s(s)^2| =|⟨,(*||^2)+(*(||^2-|φ|^2))φ +(*|φ|^2-b|φ|^2)φ⟩| ≤φ_L^∞(*(||^2-|φ|^2) + *|φ|^2-b|φ|^2) =:φ_L^∞(A_1+A_2) . On the right hand side of (<ref>), we have (s)≤δ by our assumption, and φ(s)_L^∞≤ C since φ(s)∈ H^2(^2) by <cit.> and Sobolev's embedding H^2(^2)⊂ L^∞(^2) (<cit.>). Estimate of A_1. 
Using | ||^2-|φ|^2 |=|(||-|φ|)(||+|φ|) |≤ ||^2+2|φ||| , we can bound A_1≤w_N_L^1 (θ_N_L^4^2 + 2φ_L^∞θ_N) ≤ C . In the last estimate, we used (<ref>) and the bound ∇(t)^2≤ C from Step 1. Estimate of A_2. Observing that b=ŵ_N(0) and ŵ_N(ξ)=ŵ(ξ/N^β), Plancherel's theorem yields A_2≤ŵ(·/N^β)-ŵ(0)/|·|_L^∞|·||̂φ̂|̂^̂2̂_L^2≤ CN^-β . Here, we used ∇φ≤ C and the fact that ŵ is Lipschitz. In summary, inserting (<ref>) and (<ref>) in (<ref>), we arrive at ∂_s(s)^2≤ C((s)^2+N^-2β). Consequently, we obtain (t)≤ C N^-β by Gronwall's lemma since (0)=0. Conclusion. Define t_N^max=sup{t∈[0,T]:(t)≤δ} . Assume that t_N^max< T. By <cit.>, the map [0,T]∋ t↦(t) is continuous, hence (s)≤δ for s∈[0,t_N^max]. By Step 2, this implies that (t_N^max)≤ CN^-β<δ for sufficiently large N, which contradicts t_N^max<T. Hence, t_N^max≥ T, and consequently (t)≤δ, ∀ t∈[0,T]. By Step 1, we get ∇(t)^2≤ C. Therefore, u_N(t)_H^1≤(t) _H^1 + φ(t) _H^1≤ C, ∀ t∈[0,T] . The remaining estimates in (<ref>) can be deduced from the H^1-bound as in <cit.>, using Duhamel's formula. The bound (<ref>) also follows from the above argument, where the error term N^-β comes from (<ref>). § FROM THE TRUNCATED TO THE FULL DYNAMICS In this section we prove Proposition <ref>. As explained in Section <ref>, the key step is to prove the propagation bound (<ref>). We use (<ref>) and (<ref>) to decompose |/ṭ(t),f_M^2(t)| ≤∑_j=1^3 |, [(_j + _j^*) ,^2]| with _j given in (<ref>). In the next subsections, we will handle the cases j=1,2,3 separately, and then conclude (<ref>) as well as Proposition <ref>. As a preparation, let us collect here two auxiliary estimates which will be used repeatedly in this section. The first one is a simple Sobolev-type estimate. Let W ∈ L^s(^2) with s∈(1,2] and denote by W(x-y) the corresponding two-body multiplication operator. Then _2(|W(x-y)|) ≤ C_s W_L^s(^2)_1(1-Δ) as operators on . In particular, Assumption <ref> guarantees that w∈ L^1+(^2) for every >0, hence Lemma <ref> implies that _2(|w_N(x-y)|) ≤ C_ N^_1(1-Δ). Here we used the fact that ∫_^2 |w_N(x)|^αx̣ = N^2β (α-1)∫ |w(x)|^αx̣, ∀α>0. Using Sobolev's embedding L^s'(^2)⊃ H^1(^2) with 1/s' + 1/s=1 <cit.>, we have ∬x̣ỵ |W(x-y)| |f(x,y)|^2 ≤ C_s ∫x̣ W(x-·)_L^s(^2)f(x,·)_H^1(^2)^2 = C_s W_L^s(^2)⟨ f, (1-Δ_y) f⟩_L^2(^2×^2) for all f∈ H^1(^2×^2). Therefore, we have the 2-body inequality |W(x-y)| ≤ C_s W_L^s(^2) (1-Δ_y) for each y∈^2, which implies the second-quantized form (<ref>). The second estimate is concerned with the second quantization of a two-body multiplication operator: Let A≥0 be a multiplication operator on ^2 such that A(x,y)=A(y,x) and let s∈ [1,∞). Then (A^s)≤[(A)]^s . On every k-particle sector ^k, k≥ 2, we have (A^s) = ∑_1≤ i<j ≤ k A_ij^s ≤( ∑_1≤ i<j ≤ k A_ij)^s= [(A)]^s. This concludes the proof since (A) preserves the particle number. §.§ Estimate of the linear terms We consider first the linear terms in (<ref>). For every t∈[0,) and >0, we have | ,[ (_1 + _1^*) ,^2]| ≤ C_t,N^/√(N) . From the definition of _1 in (<ref>), we obtain | ⟨, [_1,^2] Φ_N,M⟩| = 2 | ⟨, (q(t)(*||^2)) ω_1 Φ_N,M⟩| with ω_1 = √(N-)/N-1( f^2( /M) - f^2( +1/M) ) , where we used that g()_x= _x g(+1). For f as in (<ref>), we have |ω_1|≤C/M√(N)𝕀^≤ M in the sense of operators on (t). We will use the simple bound a(v) (v) = (v) a(v) + v^2 ≤ (+1) v^2 , where v= q(t)(*||^2) satisfies v_L^2≤ (*||^2)_L^2≤_L^1_L^2^2 _L^∞≤ C_t by Lemma <ref>. 
Therefore, by the Cauchy–Schwarz inequality, we deduce from (<ref>) and (<ref>) that | ⟨, [_1,^2] Φ_N,M⟩| ≤ 2 (q(t)(*||^2)) (+1)^-1/2_ op× × (+1)^1/2ω_1 Φ_N,M ≤ C_t (+1)^3/2/M√(N)𝕀^≤ MΦ_N,M≤ C_t,N^/√(N) . In the last estimate, we used that ^1/2Φ_N,M^2 = ⟨Φ_N,M, Φ_N,M⟩≤ C_t,N^ , which follows from the kinetic estimate (<ref>) in Lemma <ref>. Similarly, we also get | ⟨, [_1^*,^2] Φ_N,M⟩| ≤ C_t,N^/√(N) since | ⟨, [_1^*,^2] Φ_N,M⟩| = 2 | ⟨, a (q(t)(*||^2)) ω̃_1 Φ_N,M⟩| , where ω̃_1 = √(N-+1)/N-1( f^2( /M) - f^2( -1/M) ) as operator on . From (<ref>) and (<ref>), we obtain (<ref>). §.§ Estimate of the quadratic terms We turn to the quadratic terms in (<ref>). For every t∈[0,) and >0, we have | ,[,^2]| ≤ C_t,( 1/√(M) + N^β/M^3/2) N^ , | ,[^*,^2]| ≤ C_t,N^/M. The bound (<ref>) is one of the most difficult estimates in this section. We use the strategy explained in Section <ref>. Step 1. Let us abbreviate ω_2=√((N-)(N--1))/N-1( f^2(/M) - f^2 (+2/M) ) as operator on . For N≥ 2, we have |ω_2|≤C/M𝕀^>M/2 . We also observe that in the relevant estimate for _2, K_2=q⊗ qK̃_2 in (<ref>) can be replaced by K̃_2 as for any χ,χ'∈(t) we have χ,∬ K_2(x,y)(x)(y)_x_yχ' = χ,∬K̃_2(x,y)(x)(y)_x_yχ'. Hence, we can write ,[_2,^2] = ∬(x-y) (x) (y) ⟨, _x _y ω_2 Φ_N,M⟩. By decomposing _x _y = ^-1/2_x _y ^1/2 + ^-1/2 [^1/2,_x _y] with =(|w_N|)+1 as in (<ref>) and using the triangle inequality, we find that |,[,^2]| ≤ℰ_1+ ℰ_2 where ℰ_1 = ∬|(x-y)||(x)| |(y)||⟨, ^-1/2_x _y ^1/2ω_2 Φ_N,M⟩|, ℰ_2 = ∬|(x-y)||(x)| |(y)| |, ^-1/2[^1/2,_x_y]ω_2|. Step 2. Now let us estimate _1. By the Cauchy-Schwarz inequality, ℰ_1 ≤∬|(x-y)||(x)| |(y)| a_x a_y ^-1/2^1/2ω_2 Φ_N,M ≤(∬x̣ |(x-y) |(x)|^2 |(y)|^2)^1/2× ×( ∬x̣ỵ |w_N(x-y)| a_x a_y ^-1/2^2 )^1/2^1/2ω_2 Φ_N,M. From Lemma <ref>, we can bound ∬x̣ |(x-y) |(x)|^2 |(y)|^2 ≤_L^∞^2 u_N_L^2^2 w_N_L^1≤ C_t. Moreover, (<ref>) yields ∬x̣ỵ |w_N(x-y)| a_x a_y ^-1/2^2 ≤^2 ≤ 1. From Lemma <ref> and the kinetic estimate in Lemma <ref>, we get ⟨Φ_N,M , (|w_N|) Φ_N,M⟩≤ C_N^ε⟨Φ_N,M, M (1-Δ) Φ_N,M⟩≤ C_t, M N^2. Combining this with (<ref>) and the fact that commutes with ω_2, we find that ^1/2ω_2 Φ_N,M^2 ≤C/M^2⟨Φ_N,M, Φ_N,M⟩≤C_t, N^/M. Inserting (<ref>), (<ref>) and (<ref>) in (<ref>), we conclude that ℰ_1 ≤C_t,ε N^ε/√(M) for every constant >0. Step 3. We turn to estimate the second term _2, which is more involved. Using (<ref>) and (<ref>), we get [^1/2,_x_y] =1/π∫_0^∞ √(s)/+s |(x-y)|_x_y 1/+s +1/π∫_0^∞∫ẓ √(s)/+s(|(x-z)|+|(y-z)|)_x_y_za_z 1/+s. This allows us to decompose _2 =∬|(x-y)||(x)| |(y)| |, ^-1/2[^1/2,_x_y]ω_2| ≤ C (_2,1 + _2,2) where _2,1 = ∫_0^∞ṣ√(s)∬|(x-y)|^2 |(x)| |(y)| × ×|, ^-1/2/+s_x_y 1/+sω_2|, _2,2 =∫_0^∞ṣ√(s)∭ẓ |(x-y)| |w_N(x-z)| |(x)| |(y)| × ×|, ^-1/2/+s_x_y _z a_z 1/+sω_2|. Estimate of _2,1. By the Cauchy–Schwarz inequality, we find for any constant ε∈(0,1) that _2,1 ≤∫_0^∞ṣ√(s)∬|(x-y)|^2 |(x)| |(y)| × × a_x a_y ^-1/2/+s1/+sω_2 ≤∫_0^∞ṣ√(s)( ∬|(x-y)|^2+ |(x)|^2 |(y)|^2)^1/2× ×( ∬x̣ỵ |w_N(x-y)|^2- a_x a_y ^-1/2/+s^2 )^1/21/+sω_2 . By Lemma <ref>, we obtain ∬|(x-y)|^2+ |(x)|^2 |(y)|^2 ≤u_N_L^∞^2 u_N_L^2^2 |w_N|^2+_L^1 ≤ C_t N^2β (1+). Moreover, using that ∬x̣ỵ |w_N(x-y)|^2-_x _y a_x a_y = _2(|w_N|^2-)≤ (_2(|w_N|))^2-≤^2- by Lemma <ref>, we can bound ∬x̣ỵ |w_N(x-y)|^2- a_x a_y ^-1/2/+s^2 = ⟨^-1/2/+s , _2(|w_N|^2-) ^-1/2/+s⟩ ≤⟨, ^1-/(+s)^2⟩≤1/(1+s)^1+. In the last estimate we used that ≥ 1. Moreover, using again the fact that commutes with ω_2, we find with (<ref>) that 1/+sω_2^2 ≤C/M^2 (1+s)^2⟨, 𝕀^>M/2⟩ ≤C/M^2 (1+s)^2⟨, 2/M⟩≤C_t,/M^3 (1+s)^2 N^. 
Here in the last estimate, we used the kinetic bound in Lemma <ref>. Inserting (<ref>), (<ref>) and (<ref>) in (<ref>) we find that, for every constant ∈(0,1), _2,1 ≤ C_t,∫_0^∞ṣ√(s)√(N^2β(1+))√(1/(1+s)^1+)√(1/M^3 (1+s)^2 N^) ≤ C_t,N^(1+)β+/2/M^3/2∫_0^∞ṣ/(1+s)^1+/2≤ C_t,N^(1+)β+/2/M^3/2 . Estimate of _2,2. Similarly, for every constant >0 small, by the Cauchy–Schwarz inequality, _2,2 =∫_0^∞ṣ√(s)∭ẓ |(x-y)| |w_N(x-z)| |(x)| |(y)| × ×|, ^-1/2 (+1)^-1/2/+s_x_y _z a_z (+3)^1/2/+sω_2| ≤_L^∞^2 ∫_0^∞ṣ√(s)∭ẓ |(x-y)| |w_N(x-z)| × × a_x a_y a_z ^-1/2(+1)^-1/2/+s a_z (+3)^1/2/+sω_2 ≤ C_t ∫_0^∞ṣ√(s)( ∭x̣ỵẓ |w_N(x-y)|^2- a_x a_y a_z ^-1/2(+1)^-1/2/+s^2 )^1/2 ×( ∫ a_z (+3)^1/2/+sω_2 ^2∫ |w_N(x-z)|^2∫ |w_N(x-y)|^)^1/2. In the last estimate, we used the uniform bound _L^∞≤ C_t from Lemma <ref>. Using again (<ref>) and ≥ 1 we find that ∭x̣ỵẓ |w_N(x-y)|^2- a_x a_y a_z ^-1/2(+1)^-1/2/+s^2 = ⟨^-1/2(+1)^-1/2/+s , _2 (|w_N|^2-) ^-1/2(+1)^-1/2/+s⟩ ≤⟨ , ^1-/(+s)^2⟩≤1/(1+s)^1+. Since w is bounded and compactly supported, we get ∫x̣ |w_N(x-z)|^2∫ỵ |w_N(x-y)|^≤ C N^2β. Moreover, using (<ref>) together with ^2 ≤ M _1(1-Δ) on ^≤ M and Lemma <ref>, we have ∫ẓ a_z (+3)^1/2/+sω_2^2 = C/M^2⟨, (+3)/(+s)^2⟩ ≤C/M^2(1+s)^2,^2≤C_t, N^/M (1+s)^2 . Therefore, we deduce from (<ref>) that _2,2≤ C_t,∫_0^∞ṣ√(s)√(1/(1+s)^1+)√(N^/M (1+s)^2 N^2β)≤ C_t,N^(β+1/2)/√(M) . Putting (<ref>) and (<ref>) together, we conclude from (<ref>) that _2≤ C_t,( N^(1+)β+/2/M^3/2 + N^(β+1/2)/√(M)) . Conclusion of (<ref>): Inserting (<ref>) and (<ref>) in (<ref>), we obtain (<ref>). Step 4. It remains to prove (<ref>). Similarly to (<ref>), we can write ,[_2^*,^2] = ∬(x-y) (x) (y)⟨, a_x a_y ω̃_2 Φ_N,M⟩ with ω̃_2=√((N- + 2)(N-+1))/N-1( f^2(-2/M) - f^2 (/M) ) as operator on . This term is much easier to estimate than (<ref>) since now two annihilators hit Φ_N,M. To be precise, we have |ω̃_2|≤C/M similarly to (<ref>). Therefore, by the Cauchy–Schwarz inequality, | ,[_2^*,^2]| ≤∬ |(x-y)| |(x)| |(y) | a_x a_y ω̃_2 Φ_N,M ≤( ∬ |(x-y)| |(x)|^2 |(y) |^2 )^1/2× ×( ∬ |(x-y)| a_x a_y ω̃_2 Φ_N,M^2 )^1/2 ≤ C_t ⟨Φ_N,M, _2(|w_N|) |ω̃_2|^2 Φ_N,M⟩^1/2 ≤C_t,/M^2⟨Φ_N,M, N^_1(1-Δ) Φ_N,M⟩^1/2≤ C_t,N^/M . Here we used again (<ref>), Lemma <ref> and the kinetic estimate in Lemma <ref>. Thus, (<ref>) holds true. This completes the proof of Lemma <ref>. §.§ Estimate of the cubic terms Concerning the cubic terms in (<ref>), we have the following bounds: Let ∈_(t), t∈[0,) and >0. Then | ,[_3,^2]| ≤ C_t,( 1/√(N) + N^β/M√(N)) N^ , | ,[_3^*,^2]| ≤ C_t,N^/√(N) . Again, the bound (<ref>) is much more difficult than (<ref>). We will proceed similarly to the quadratic terms. Step 1. Analogously to (<ref>), we denote ω_3=√(1--1/N-1)( f^2(/M) - f^2(+1/M) ) as operator on , which satisfies |ω_3|≤C/M𝕀^≤ M. Moreover, similarly to (<ref>) we can write ,[_3,^2] = 1/√(N)∬(x-y) (x)⟨, _x _y a_y ω_3 Φ_N,M⟩. By decomposing _x _y a_y = ^-1/2_x _y a_y ^1/2 + ^-1/2[_x _y a_y, ^1/2] , we obtain | ,[_3,^2]| ≤ℰ_3+ ℰ_4 , where ℰ_3 = 1/√(N)∬ |(x-y)| |(x)| |⟨, ^-1/2_x _y a_y ^1/2ω_3 Φ_N,M⟩|, ℰ_4 = 1/√(N)∬ |(x-y)| |(x)| |⟨, ^-1/2[_x _y a_y, ^1/2] ω_3 Φ_N,M⟩|. Step 2. Let us first estimate _3. By the Cauchy–Schwarz inequality, ℰ_3 ≤1/√(N)∬ |(x-y)| |(x)| a_x a_y ^-1/2 a_y ^1/2ω_3 Φ_N,M ≤_L^∞/√(N)(∬|(x-y)| a_x a_y ^-1/2^2 )^1/2× ×(∬x̣ỵ |w_N(x-y)| a_y^1/2ω_3 ^2)^1/2. We can simplify the right-hand side using (<ref>) and Lemma <ref>. Moreover, by (<ref>) and Lemma <ref>, we have |ω_3|^2 ≤C_ N^/M^2^2 _1(1-Δ)≤ C_ N^_1(1-Δ) on ^≤ M. 
Combining this with the kinetic bound in Lemma <ref>, we find that ∬x̣ỵ |w_N(x-y)| a_y^1/2ω_3 ^2 = w_N_L^1⟨, |ω_3|^2 ⟩ ≤ C_ N^2 for every constant >0. Therefore, we deduce from (<ref>) that ℰ_3 ≤C_t, N^/√(N) . Step 3. Now we turn to the complicated error term _4. A direct computation shows that [(||),_x_ya_y]=|(x-y)|_x_ya_y+∫|(x-z)|_x_y_z a_za_y , and with (<ref>) this yields [^1/2,_x_y a_y ] =1/π∫_0^∞ √(s)/+s |(x-y)|_x_y a_y 1/+s +1/π∫_0^∞∫ẓ √(s)/+s |(x-z)| _x_y_z a_za_y 1/+s. Thus, by the triangle inequality and the bound _L^∞≤ C_t from Lemma <ref>, we can split ℰ_4 = 1/√(N)∬ |(x-y)| |(x)| |⟨, ^-1/2[_x _y a_y, ^1/2] ω_3 Φ_N,M⟩| ≤ C_t (_4,1 + _4,2) , where _4,1 = 1/√(N)∫_0^∞ṣ√(s)∬ |(x-y)|^2 × ×|⟨, ^-1/2/+s_x _y a_y 1/+sω_3 Φ_N,M⟩| _4,2 = 1/√(N)∫_0^∞ṣ√(s)∭ẓ |(x-y)| |(x-z)| × ×|⟨, ^-1/2/+s_x _y _z a_z a_y 1/+sω_3 Φ_N,M⟩|. Estimate of _4,1. By the Cauchy–Schwarz inequality we have _4,1 ≤1/√(N)∫_0^∞ṣ√(s)∬ |(x-y)|^2 a_x a_y ^-1/2/+s a_y 1/+sω_3 Φ_N,M ≤1/√(N)∫_0^∞ṣ√(s)( ∬ |(x-y)|^2- a_x a_y ^-1/2/+s^2 )^1/2× ×(∬x̣ỵ |w_N(x-y)|^2+ a_y 1/+sω_3 Φ_N,M^2 )^1/2. The right-hand side can be simplified using (<ref>) and the estimate ∬x̣ỵ |w_N(x-y)|^2+ a_y 1/+sω_3 Φ_N,M^2 = |w_N|^2+_L^1⟨Φ_N,M, |ω_3|^2/(+s)^2Φ_N,M⟩≤ C_t,N^(1+)2βN^/M^2 (1+s)^2 , which follows from (<ref>), ≥ 1, and the kinetic bound in Lemma <ref>. Altogether, this gives _4,1 ≤C_t,/√(N)∫_0^∞ṣ√(s)√(1/(1+s)^1+)√( N^(1+)2βN^/M^2 (1+s)^2) ≤ C_t,N^(1+)β + /2/√(N)M. Estimate of _4,2. By the Cauchy–Schwarz inequality, _4,2 = 1/√(N)∫_0^∞ṣ√(s)∭ẓ |(x-y)| |(x-z)| × ×|⟨, ^-1/2 (+2)^-1/2/+s_x _y _z a_z a_y (+3)^1/2/+sω_3 Φ_N,M⟩| ≤1/√(N)∫_0^∞ṣ√(s)∭ẓ |(x-y)| |(x-z)| × × a_x a_y a_z ^-1/2(+2)^-1/2/+s a_y a_z (+3)^1/2/+sω_3 Φ_N,M ≤1/√(N)∫_0^∞ṣ√(s)× ×( ∭x̣ỵẓ |(x-y)|^2- a_x a_y a_z ^-1/2 (+2)^-1/2/+s^2 )^1/2 ×( ∭x̣ỵẓ |(x-y)|^ |(x-z)|^2 a_y a_z (+3)^1/2/+sω_3 Φ_N,M^2 )^1/2. We can bound ∭x̣ỵẓ |(x-y)|^2- a_x a_y a_z ^-1/2 (+2)^-1/2/+s^2 = ∬x̣ỵ |(x-y)|^2- a_x a_y ^-1/2/+s^2 ≤1/(1+s)^1+ as in (<ref>). Since w is bounded and compactly supported, we have the pointwise estimate |(x-y)|^ |(x-z)|^2 =|(x-y)|^ |(x-z)|^2 𝕀_{|y-z|≤ CN^-β} ≤ C N^4β |(x-y)|^𝕀_{|y-z|≤ CN^-β} . Moreover, note that the operators _2 ( 𝕀_{|y-z|≤ CN^-β}), , and ω_3 all commute. Consequently, using ≥ 1 and (<ref>), we can bound ∬ỵẓ𝕀_{|y-z|≤ CN^-β} a_y a_z (+3)^1/2/+sω_3 Φ_N,M^2 = ⟨Φ_N,M , _2 ( 𝕀_{|y-z|≤ CN^-β}) +3/(+s)^2|ω_3|^2 Φ_N,M⟩ ≤C/M(1+s)^2⟨Φ_N,M, _2 ( 𝕀_{|y-z|≤ CN^-β}) Φ_N,M⟩. Using Lemma <ref> with s=2β/(2β-ε), we obtain _2 ( 𝕀_{|y-z|≤ CN^-β}) ≤ C_ N^ N^-2β_1(1-Δ) for every >0. Therefore, together with Lemma <ref>, we deduce that ∭x̣ỵẓ |(x-y)|^ |(x-z)|^2 a_y a_z (+3)^1/2/+sω_3 Φ_N,M^2 ≤CN^4β/M(1+s)^2 |w_N|^_L^1⟨Φ_N,M, _2 ( 𝕀_{|y-z|≤ CN^-β}) Φ_N,M⟩ ≤C_t, N^(2β+2) /(1+s)^2. Inserting (<ref>) and (<ref>) in (<ref>) we find that _4,2 = C_t,/√(N)∫_0^∞ṣ√(s)√(1/(1+s)^1+)√(N^(2β+2) /(1+s)^2)≤C_t,N^(β+1)/√(N) for every constant >0. From (<ref>) and (<ref>) we get _4≤ C_t,( N^β/√(N)M + 1/√(N)) N^ . Conclusion of (<ref>): Given the decomposition (<ref>), the desired bound (<ref>) follows immediately from (<ref>) and (<ref>). Step 4. It remains to prove (<ref>). Similarly to (<ref>), we can write ,[_3^*,^2] = 1/√(N)∬(x-y) (x)⟨, _y a_x a_y ω̃_3 Φ_N,M⟩ with ω̃_3=√(1-/N-1)( f^2(-1/M) - f^2(/M) ) as operator on , which satisfies |ω̃_3|≤C/M𝕀^≤ M. 
By the Cauchy–Schwarz inequality, |,[_3^*,^2]| = 1/√(N)| ∬(x-y) (x)⟨ (+1)^-1/2, _y a_x a_y ^1/2ω̃_3 Φ_N,M⟩| ≤_L^∞/√(N)∬ |(x-y)| a_y (+1)^-1/2 a_x a_y ^1/2ω̃_3 Φ_N,M ≤C_t/√(N)( ∬ |(x-y)| a_y (+1)^-1/2 ^2 )^1/2 ×( ∬ |(x-y)| a_x a_y ^1/2ω̃_3 Φ_N,M^2 )^1/2 = C_t/√(N)⟨, w_N_L^1 (+1)^-1⟩^1/2⟨Φ_N,M, _2(|w_N|) |ω̃_3|^2 Φ_N,M⟩^1/2 ≤C_t,N^/√(N)⟨Φ_N,M, _1(1-Δ) ^2 ^≤ M M^-2Φ_N,M⟩^1/2≤C_t,/√(N) N^2 , where we used Lemma <ref> and the kinetic bound in Lemma <ref>. This concludes the proof of (<ref>) and thus of Lemma <ref>. §.§ Conclusion of Proposition <ref> First, inserting the bounds from Lemmas <ref>, <ref> and <ref> in (<ref>), and using M≤ N to simplify some error terms, we find that the desired propagation bound (<ref>) holds true, namely that |/ṭ(t),f_M^2(t)|≤ C_t,εN^ε(1/√(M)+N^β/M^3/2) . Now we are ready to give Define (t):=1-(t),^2(t). By Assumption (<ref>) and by definition (<ref>) of , we obtain (0) = Φ_0, (1-^2) Φ_0≤Φ_0,𝕀^>M/2Φ_0≤2/MΦ_0,Φ_0≤C/M . Combining this with (<ref>), we can therefore bound (t) ≤ C_t,( 1/√(M) + N^β/M^3/2) for all t∈ [0,) and >0 by Gronwall's lemma. To conclude Proposition <ref>, we prove that (t)-(t)^2≤ 4(t) . Let us drop the time dependence from the notation for simplicity and write -^2 = Φ_N ^2 + ^2 - 2⟨Φ_N, Φ_N,M⟩ ≤ 2 - 2⟨Φ_N, f_M^2 Φ_N,M⟩ - 2⟨Φ_N(t), g_M^2 Φ_N,M⟩. Here we denoted g_M^2=1-f_M^2 and used that Φ_N≤ 1, ≤ 1. Moreover, by the Cauchy–Schwarz inequality, 2| ⟨Φ_N, g_M^2 Φ_N,M⟩| ≤g_M Φ_N^2 + g_M Φ_N,M^2 ≤ 2 - f_M Φ_N^2 - f_M Φ_N,M^2 ≤ 2 - 2 | ⟨Φ_N, f_M^2 Φ_N,M⟩|. Thus, (<ref>) follows immediately. The proof of Proposition <ref> is complete. § CONCLUSION OF THE MAIN THEOREMS §.§ Proof of Theorem <ref> Let M=N^1-δ with δ∈ (0,1). Recall that Φ_N(t) and Φ_N,M(t) are defined in (<ref>) and (<ref>), respectively. Since U_N: ^N →(t) is a unitary transformation, the desired norm approximation (<ref>) is equivalent to Φ_N(t) - Φ(t) ^2 ≤ C_t N^-α_2. By Lemma <ref> and Proposition <ref>, we can bound (t)-Φ(t) ^2 ≤ 2 (t)-Φ_N,M(t) ^2 + 2Φ_N,M(t)-Φ(t) ^2 ≤ C_t, N^(1/√(M)+N^β/M^3/2 + √(M/N)) =C_t, N^(N^δ-1/2+ N^3δ/2+β-3/2+N^-δ/2) for all t∈[0,) and >0. Here we have put back M=N^1-δ at the end. The optimal choice for δ is δ=3-2β/4 if β≥1/2 1/2 if β≤1/2 which implies (<ref>) with every 0<α_2<min(1/8, (3-2β)/16) . §.§ Proof of Theorem <ref> The implication of the convergence of density matrices from the norm convergence is well-known, see e.g. <cit.>. Here we recall a quick derivation for the reader's convenience. We will again drop the time dependence from the notation. Let q = 1- p= 1- |⟩⟨| as in (<ref>). By using the rules (<ref>) (see also Remark <ref>), Theorem <ref> and Lemma <ref>, it follows that N(q γ_Ψ_N^(1) q)= √()Φ_N^2 ≤ 2√()^≤ N ( Φ_N - Φ) ^2 +2√()Φ^2 ≤ C_t,( N^1-2α_2 + N^). Then by the triangle and Cauchy–Schwarz inequalities, we conclude that |γ_Ψ_1^(1)-|φ⟩⟨φ|| ≤| p- |φ⟩⟨φ| | + | p(γ_Ψ_1^(1)-1)p| + |q γ_Ψ_1^(1) q | + 2 |p γ_Ψ_1^(1) q| ≤ 2 u_N -φ_L^2 +2 (qγ_Ψ_1^(1) q) + 2 √( | qγ_Ψ_1^(1) q|)√( | pγ_Ψ_1^(1) p|) ≤ C_t, (N^-β+ N^-α_2 + N^). Here we used (p)= γ_Ψ_N^(1)=1 and Lemma <ref>. Thus (<ref>) holds for α_1=min(β,α_2). The proof of Theorem <ref> is complete. Acknowledgements. We would like to thank Kihyun Kim for helpful discussions. L. Boßmann was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via the Munich Center of Quantum Science and Technology (Germany's Excellence Strategy EXC-2111-390814868). C. Dietze and P.T. Nam were supported by the DFG project “Mathematics of many-body quantum systems” (project No. 
426365943). C. Dietze also acknowledges the partial support from the Jean-Paul Gimon Fund and from the Erasmus+ programme.
http://arxiv.org/abs/2307.02521v1
20230705180000
Probing factorization violation with vector angularities
[ "Pim Bijl", "Steven Niedenzu", "Wouter J. Waalewijn" ]
hep-ph
[ "hep-ph", "nucl-th" ]
Nikhef, Science Park 105, 1098 XG, Amsterdam, The Netherlands Nikhef, Science Park 105, 1098 XG, Amsterdam, The Netherlands Nikhef, Science Park 105, 1098 XG, Amsterdam, The Netherlands Institute for Theoretical Physics Amsterdam and Delta Institute for Theoretical Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands Factorization underlies all predictions at the Large Hadron Collider (LHC), but has only been rigorously proven in a few cases. One of these cases is the Drell-Yan process, pp → Z/γ + X, in the limit of small boson transverse momentum. We introduce a one-parameter family of observables, which we call vector angularities, of which the transverse momentum is a special case. This enables the study of factorization violation, with a smooth transition to the limit for which factorization has been established. Like the angularity event shapes, vector angularities are a sum of transverse momenta weighted by rapidity, but crucially this is a vector sum rather than a sum of the magnitude of transverse momenta. We study these observables in Pythia, using the effect of multi-parton interactions (MPI) as a proxy for factorization violation, finding a negligible effect in the case where factorization is established but sizable effects away from it. We also present a factorization formula for the cross section, which does not include factorization violating contributions from Glauber gluons, and thus offers a baseline for studying factorization violation experimentally using vector angularities. Our predictions at next-to-leading logarithmic accuracy (NLL') are in good agreement with Pythia (not including MPI), and can be extended to higher order. Probing factorization violation with vector angularities Wouter J. Waalewijn August 1, 2023 ======================================================== § INTRODUCTION All predictions for scattering processes at the Large Hadron Collider (LHC) rely on factorization. Factorization allows one to write the cross section σ for a given process and measurement as some convolution of various ingredients. At the LHC this is essential to separate the perturbatively-calculable partonic cross section σ̂ from the nonperturbative dynamics of the incoming protons, described by parton distribution functions f. For example, for the Drell-Yan process pp → γ/Z + X, this takes the form / Q Y = ∑_i,j∫ x_1 f_i(x_1,μ) ∫ x_2 f_j(x_2,μ) ×_ij/ Q Y(x_1,x_2,μ) where the sum on i,j=g, u, u̅, d, … runs over all parton flavors, whose momentum fractions x_1,2 are integrated over, with Q and Y the invariant mass and rapidity of the vector boson. When measurements significantly restrict the QCD radiation in the final state, the factorization becomes more involved. For example, if in the Drell-Yan process the transverse momentum of the boson is measured to be small compared to Q, this implies that hadronic radiation must be soft (low-energetic) or collinear (parallel to one of the incoming protons). In this case, the factorization involves transverse-momentum-dependent parton distributions. While factorization is proven for the Drell-Yan process <cit.>, it is used for generic LHC processes. However, a crucial and nontrivial step to establish factorization involves showing that the Glauber region (or Glauber modes <cit.> in Soft-Collinear Effective Theory <cit.>) does not give a non-trivial contribution.[We write “non-trivial", since Ref.
<cit.> shows that certain “Cheshire" Glauber contributions can simply be accounted for by using the proper orientation of Wilson lines.] There has been progress in understanding the origin of this factorization violation, finding e.g. that for single-scale observables such contributions necessarily involve a Lipatov vertex <cit.>. A concrete example of factorization violation was presented in Ref. <cit.>. In this paper we introduce a family of observables, which we call vector angularities. Though we focus on the Drell-Yan process, these observables can be applied to other processes in which a color-singlet is produced. The vector angularities are defined as τ⃗_a = ∑_i k⃗_⊥,i e^-a|y_i|, where the sum on i runs over the hadronic final state, and the transverse momentum k⃗_⊥, i of particle i is weighted by its rapidity y_i, according to the choice of the parameter a. This is similar to the angularity event shape <cit.>, which has been extended to Drell-Yan <cit.>, deep-inelastic scattering <cit.> and jets <cit.>. A crucial difference is that we take the vector sum of transverse momenta. For the special case of τ⃗_0, this corresponds to the transverse momentum of the hadronic final state, and thus the transverse momentum of the boson, by momentum conservation. In this case it has been shown that the factorization violating contributions from the Glauber region can indeed be ignored. Our family of observables therefore allows one to explore possible factorization violating effects in a way that lets one smoothly turn them off as a → 0. The effect of Glauber gluons has been connected to multiple-parton interactions (MPI) <cit.>, i.e. multiple partonic collisions between the same pair of colliding protons. As in Ref. <cit.>, we therefore study the effect of MPI in Pythia <cit.> on our vector angularities to get a first impression of possible factorization violation. We will also present a factorization formula for the Drell-Yan cross section differential in τ⃗_a, assuming the absence of factorization violating effects. Interestingly, this vector-type observable does not involve rapidity divergences, and it is another example where resummation needs to be carried out in the conjugate space <cit.>. We obtain resummed predictions at next-to-leading logarithmic (NLL') accuracy, which are in agreement with Pythia (without MPI). Though the accuracy of these predictions is limited, they can in principle be substantially improved by including higher order corrections. Indeed, for τ⃗_0, results for Drell-Yan have even been obtained at N^4LL accuracy <cit.>. Such resummed calculations would provide a baseline for studying factorization violation experimentally, using vector angularities. The outline of this paper is as follows: In pythia we show numerical results from Pythia for the vector angularities, to explore their sensitivity to factorization violation using its MPI model. Our factorization formula and resummed results are presented in calculation, with the perturbative ingredients relegated to ingredients. We conclude in conclusions. § PYTHIA AND UNDERLYING EVENT In this section we study the effect of multi-parton interactions (MPI) in the Monte Carlo event generator Pythia to assess the sensitivity of vector angularities τ⃗_a to factorization violation. In particular, we are interested in studying the dependence on a, knowing that for a=0 factorization violation effects should be absent. We simulate proton-proton collisions in Pythia with a center-of-mass energy of 13 TeV; a minimal sketch of such a setup is given below.
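The following sketch is our own illustration of how the event generation and the computation of τ⃗_a could be organized, assuming the Pythia 8 Python interface is available; all settings, parameter values and helper names shown are illustrative assumptions (to be checked against the Pythia manual), not the authors' actual configuration, and the event selection anticipates the description given in the remainder of this section.

import math
import pythia8  # assumes the Pythia 8.3 Python bindings

def vector_angularity(finals, a, skip_indices):
    # Vector sum of transverse momenta weighted by exp(-a|y|), cf. the definition of tau_a.
    tx, ty = 0.0, 0.0
    for i, p in finals:
        if i in skip_indices:
            continue  # exclude the e+ e- assumed to come from the boson decay
        w = math.exp(-a * abs(p.y()))
        tx += w * p.px()
        ty += w * p.py()
    return math.hypot(tx, ty)  # |tau_a|; the azimuthal angle is irrelevant

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")              # 13 TeV pp collisions
pythia.readString("WeakSingleBoson:ffbar2gmZ = on")  # Drell-Yan: q qbar -> Z/gamma*
pythia.readString("23:onMode = off")                 # illustrative: force Z -> e+ e-
pythia.readString("23:onIfAny = 11")
pythia.readString("PartonLevel:MPI = off")           # toggle to compare with/without MPI
pythia.readString("HadronLevel:all = on")            # toggle to study hadronization
pythia.init()

a, values = 0.5, []
for _ in range(100000):
    if not pythia.next():
        continue
    finals = [(i, pythia.event[i]) for i in range(pythia.event.size())
              if pythia.event[i].isFinal()]
    electrons = [i for i, p in finals if p.id() == 11 and p.pT() >= 2.5]
    positrons = [i for i, p in finals if p.id() == -11 and p.pT() >= 2.5]
    if len(electrons) != 1 or len(positrons) != 1:
        continue  # keep only events with exactly one such e- and one e+
    values.append(vector_angularity(finals, a, {electrons[0], positrons[0]}))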
One parton from each colliding proton engages in a hard scattering process to produce a Z boson/photon (the Drell-Yan process). For definiteness, we set the subsequent decay of this boson to an electron-positron pair. There can be additional interactions between the remaining partons in the protons. These MPI lead to additional radiation, i.e. on top of that emitted in the production of a Z boson/photon. We will furthermore always include initial-state radiation, and explore the effect of hadronization by turning it on/off. For each simulated event, the vector angularity τ⃗_a in vector_angularities is calculated by summing over the contribution from each final-state particle, except for the decay products of the boson. Due to the fact that other processes besides the boson decay can also create electrons or positrons, we need to select the proper final-state particles to calculate the vector angularities. Events are selected based on the requirement that exactly one electron and one positron have a transverse momentum of k_⊥≥ 2.5 GeV, which we assume to be from the boson decay and are not included in the calculation of τ⃗_a. Only 2 percent of the events are discarded due to this cut. The simulations are run for three scenarios: First we have MPI and hadronization turned off, which can be compared to our analytical calculation. Second, MPI and hadronization are both on, which should be representative of measurements of the LHC, though a full study would require including backgrounds and detector simulations. Third, hadronization is kept on but MPI is turned off to assess the size of factorization violating effects. For each simulation scenario, different values of the parameter a were considered, ranging from a = -0.25 to a = 1. This range was chosen to explore the vicinity of a=0, where the effect of MPI is expected to be negligible. Furthermore, for a ≤ -1, we would run into problems of IR safety, while for very large values of a only radiation at extremely central rapidities would be probed. We generate 100.000 events and calculate the vector angularities for each value of a. Because the angle of τ⃗_a is irrelevant, we show results for |τ⃗_a|. The results are shown in pythia, where we choose a = -0.25, 0, 0.5 and 1 as representative values. First we note by comparing the curves with/without hadronization, that its effect is rather mild. (Though we also explored more negative values of a, such as a=-1, where hadronization effects are large.) For a = 0, MPI clearly have very little effect on the vector angularities. This is in agreement with the expectation for τ⃗_0, for which factorization-violating effects from Glauber gluons are known to be absent. For the other values of a, there is a more pronounced difference between the distribution that includes MPI and those that do not, and this difference increases as a is further from 0. Based on this we conclude that the vector angularities are indeed sensitive to MPI, suggesting their use as a probe of factorization violation. § RESUMMED CALCULATION §.§ Factorization Using the framework of SCET, we factorize the cross section for small |τ⃗_a| into the following ingredients / Q Y ^2 τ⃗_a = ∑_q _0,q H(Q^2,μ) ∫^2 τ⃗_a,1' B_q(τ⃗_a,1',x_1,μ) ∫^2 τ⃗_a,2' B_q̅(τ⃗_a,2',x_2,μ) S(τ⃗_a - τ⃗_a,1'/(Qe^Y)^a - τ⃗_a,2'/(Qe^-Y)^a,μ) = ∑_q _0,q H(Q^2,μ) ∫^2 b⃗_⊥/(2 π)^2 e^-τ⃗_a ·b⃗_⊥B̃_q(b⃗_⊥/(Qe^Y)^a,x_1,μ) B̃_q̅(b⃗_⊥/(Qe^-Y)^a,x_2,μ) S̃(b⃗_⊥,μ) , where we obtained the second line by performing a Fourier transform. 
Our convention for the Fourier transform is given in fourier, and we include a tilde to indicate that the functions have been transformed. The sum on q=u, u̅, d, … runs over all (anti-)quark flavors, and the momentum fractions x_1 = Qe^Y/E_cm, x_2 = Qe^-Y/E_cm are fixed by the invariant mass Q and rapidity Y of the vector boson, and the center of mass energy E_cm of the collision. The Born cross section is given by _0,q = 8π_em^2/3N_c Q E_cm^2 ×[ Q_q^2 +(v_q^2 + a_q^2) (v_ℓ^2+a_ℓ^2) - 2 Q_q v_q v_ℓ (1-m_Z^2/q^2)/(1-m_Z^2/q^2)^2 + m_Z^2 Γ_Z^2/q^4], where _em is the electromagnetic coupling, N_c=3 is the number of colors, Q_q is the electric charge of the quark, v_ℓ,q and a_ℓ,q are the standard vector and axial couplings of the leptons and quarks, and m_Z and Γ_Z are the mass and width of the Z boson. The hard function H describes the short-distance collision of an incoming quark and anti-quark that produce the Z/γ, and includes virtual hard corrections. Real hard radiation is not possible for small |τ⃗_a|, and so the hard function is independent of the vector angularity measurement. B_q and B_q̅ are the beam functions of the incoming (anti-)quark, which include the PDFs and initial-state radiation coming from the extracted partons <cit.>. At next-to-leading order, this includes a contribution from the gluon PDF, where the extracted gluon splits into a quark and anti-quark pair, with one of them entering the hard interaction and the other going into the final state. The contribution of soft radiation to the measurement, emitted by the incoming partons, is encoded in the soft function S. The hard function has been calculated a long time ago <cit.>. We have calculated the one-loop beam and soft function, and present our results in pert. As these functions contain at most one real emission, there is no difference between a vector or scalar sum, allowing us to validate our results with the known angularity soft <cit.> and beam function <cit.>. Of course the difference between vector and scalar sum does lead to distinct features for the resummation, since this encodes the dominant effect of multiple emissions. §.§ Resummation To perform the resummation, we evaluate the hard, beam and soft functions at their natural scale and use the renormalization group to evolve them to a common scale. In the case of transverse momentum resummation, performing the resummation directly in momentum space is challenging (see e.g. Refs. <cit.>), so we will also switch to Fourier space for τ⃗_a. The resummed Drell-Yan cross section differential in Q, Y and |τ⃗_a| is given by / Q Y |τ⃗_a| = ∑_q _0,q H(Q^2,μ_H) ∫_0^∞ b_⊥ b_⊥ |τ⃗_a| J_0(b_⊥ |τ⃗_a|) ×B̃_q(b_⊥^*/(Qe^Y)^a,x_1,μ_B) B̃_q̅(b_⊥^*/(Qe^-Y)^a,x_2,μ_B) ×S̃(b_⊥^*,μ_S) U_H(Q^2,μ_H,μ_B) U_S(b_⊥^*, μ_S, μ_B, a) . We have now written the cross section differential in |τ⃗_a|, and indicated that the soft- and beam functions depend on b_⊥≡ |b⃗_⊥| and not the angle of b⃗_⊥ (due to azimuthal symmetry). Compared to factorizedcross, we have included the evolution kernels U_H and U_S of the hard and soft function, that we use to evolve them from their natural scale μ_H and μ_S to the beam scale μ_B. The expressions for the renormalization group equations and evolution kernels are given in resum, and the natural scales μ_H, μ_B and μ_S are discussed below. Finally, the star in b_⊥^* indicates a prescription to avoid the Landau pole, which also enters through the scales μ_B and μ_S, see bstar. 
The natural scales of the hard, beam and soft function are those for which they do not contain large logarithms. As the hard function contains logarithms of Q^2/μ_H^2, see hard, the natural scale is μ_H = Q. In the soft and beam function the large logarithms are L_b and L_b' in LbLbp. We simply choose μ_S and μ_B such that L_b = 0 and L_b^' = 0, respectively. If we strictly follow this procedure, μ_B would depend on Y, so we instead choose to use the scale obtained in this manner for Y=0. The natural scales are then given by μ_H = Q, μ_S = 2e^-γ_E/ b_⊥^*, μ_B = Q^a/1+a(2e^-γ_E/b_⊥^*)^1/1+a. The uncertainties on the cross section are determined by varying the hard, beam and soft scales up and down. For the hard scale this is a factor of two, i.e. we take μ_H=Q for the central curve and consider μ_H=Q/2 and μ_H=2 Q to estimate the perturbative uncertainty. For the soft scale we also take a factor of 2 but simultaneously vary μ_B in order to maintain μ_B^1+a = μ_H^aμ_S. The μ_S variation dominates our scale uncertainty, and we symmetrize the resulting uncertainty band. To avoid the Landau singularity, we regularize the nonperturbative region at large b_⊥ in final_cross by using a “b-star" prescription <cit.>, b^*_⊥=b_⊥/√(1+ b_⊥^2/b_ max^2) . This ensures that b^*_⊥→ b_ max when b_⊥→∞. We determine an appropriate value for b_ max by requiring that the scales μ_H, μ_B and μ_S stay above the minimum value μ_0=0.8. Since the beam scale depends on the parameter a, the expressions for b_ max will depend on the value of a that is considered: For a ≥ 0, μ_S is the smallest scale, while for a < 0, μ_B is the smallest scale. This leads to b_ max(a) = 2 e^-γ_E/μ_0(Q/μ_0)^min(a,0) . For our normalized distributions, the integration over Q is numerically irrelevant (percent-level effect). Performing the Y integral changes the shape of the τ⃗_a distribution by less than 10% compared to using Y=0, and is extrapolated from the LL cross section using: ∫ Y _ NLL'/ Y ^2 τ⃗_a = [_ NLL'/ Y ^2 τ⃗_a/ _ LL/ Y ^2 τ⃗_a]_Y=0 ×∫ Y _ LL/ Y ^2 τ⃗_a . Finally, we note that for large b, b_⊥^* → b_ max and the cross section in b approaches a constant. A constant in b-space transform to a delta function in τ⃗_a, and we therefore subtract of this contribution to improve the numerical stability. §.§ Numerical results We have implemented our resummation in final_cross at leading logarithmic (LL) and next-to-leading logarithmic (NLL') order. The former only involves the tree-level hard, beam and soft function as well as the lowest order cusp anomalous dimension _0 and running coupling (β_0). At NLL' we include all ingredients in ingredients, and consistently expand the cross section, e.g. dropping cross terms involving a one-loop beam and one-loop soft function. We do not include the matching to next-to-leading order (NLO) cross section, so our results become less reliable for large values of |τ⃗_a|. For this reason, we normalize the distribution on the plotted interval. For the PDFs we use MSTW2008 at NLO <cit.> with α_s(M_Z) = 0.12. Our results for a = -0.25, 0, 0.5, 1 at LL and NLL' are shown in results. The bands indicate the perturbative uncertainty and are obtained by scale variations, as discussed in resummation. We apply the same factor for the scale variations as was used to normalize the central curve, instead of separately normalizing each of the scale variations. This makes our uncertainty estimate more conservative. 
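As a purely illustrative aid, the scale choices and the b-star prescription described above translate directly into a few lines of code. This is our own sketch rather than the authors' implementation; in particular, the minimum scale μ_0 = 0.8 is assumed to be in GeV.

import math

GAMMA_E = 0.5772156649015329

def b_star(b_perp, bmax):
    # b* prescription: freezes b at bmax to avoid the Landau pole.
    return b_perp / math.sqrt(1.0 + b_perp**2 / bmax**2)

def b_max(a, Q, mu0=0.8):
    # Largest b such that the smallest scale stays above mu0 (assumed in GeV).
    return 2.0 * math.exp(-GAMMA_E) / mu0 * (Q / mu0) ** min(a, 0.0)

def natural_scales(a, Q, b_perp):
    # Hard, beam and soft scales for the vector angularity resummation.
    bs = b_star(b_perp, b_max(a, Q))
    mu_H = Q
    mu_S = 2.0 * math.exp(-GAMMA_E) / bs
    mu_B = Q ** (a / (1.0 + a)) * mu_S ** (1.0 / (1.0 + a))  # mu_B^(1+a) = mu_H^a mu_S
    return mu_H, mu_B, mu_S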
Formally, our expressions diverge at a=0, so in this case we take the limit numerically by setting a=0.01. This yields a reasonable result at LL, but the NLL' curve is unstable and therefore omitted. For the other values of a, the LL and NLL' uncertainty bands tend to overlap, except for larger values of |τ⃗_a|. There, our results are anyway less reliable because our factorization formula does not account for power corrections of 𝒪(|τ⃗_a|^2/Q^2), which could be remedied by matching to the NLO cross section. For a=0.5 and 1, the uncertainties at NLL' are smaller than at LL, indicating convergence. For a = -0.25, the (relative) uncertainties at LL and NLL' are very similar. This is in line with the results in Ref. <cit.> for angularities in jets, see their fig. 7, where their β corresponds to a+1. Finally, these plots also include the results at parton level, without MPI, which are in agreement with our calculation. Since Pythia contains NLL' ingredients, it is not surprising that they are closer to NLL' than LL. We note that our calculation is of course only a first step, and we expect substantial improvement at higher orders in perturbation theory. In particular, the soft function needed at NNLL'+NNLO can be obtained using SoftSERVE <cit.>, and there are ongoing efforts to automate the beam function calculation at this order <cit.>. § CONCLUSIONS In this work we proposed a new one-parameter family of hadron collider observables, called vector angularities, that can be used to study the effects of factorization violation. When the parameter a=0, factorization has been established and these effects are absent. Exploring factorization violation, using the MPI model of Pythia as a proxy, we found agreement with the absence of these effects for a=0, while they grow for values of a away from 0. We also explored the effect of hadronization in Pythia, which is tiny in comparison to MPI. We then presented a factorization formula for the vector angularity cross section, assuming the absence of factorization violation. This would provide a baseline for studying these effects at the LHC. Our numerical results at LL and NLL' were in agreement with Pythia (without MPI), but still have large uncertainties. Calculating higher orders will certainly reduce these uncertainties, and due to the development of automated tools, NNLL'+NNLO should soon be within reach. Indeed, for the special case of a=0, N^4LL results have already been obtained. Finally, it would also be very interesting to attempt a direct calculation of the contribution from Glauber gluon exchanges, using the formalism of Ref. <cit.>. We thank D. Neill for discussions. This work is supported by the D-ITP consortium, a program of NWO that is funded by the Dutch Ministry of Education, Culture and Science (OCW). § We define our Fourier transformation as follows: f̃(b⃗_⊥) = ∫^2 τ⃗_a e^τ⃗_a ·b⃗_⊥ f(τ⃗_a) . We now present the expressions for the ingredients in the factorization up to next-to-leading order, as well as the renormalization group equations and anomalous dimensions needed for NLL' resummation. The MS scheme is employed throughout. The renormalized hard function is given by H(Q^2,μ) = 1+ α_s C_F/2 π(-ln^2(Q^2/μ^2) + 3ln(Q^2/μ^2) -8 + 7π^2/6) + 𝒪(α_s^2) . The renormalized soft function is given by S(b,μ) = 1 + α_s C_F/2π1/a(-L_b^2-π^2/6) + 𝒪(α_s^2) , where L_b ≡ln(b⃗_⊥^2 μ^2 e^2_E/4) .
The renormalized beam functions are matched onto PDFs using B̃_q(b⃗_⊥',x,μ) = ∑_j ∫ x'/x'ℐ̃_qj(b⃗_⊥',x',μ) f_j(x/x',μ) with matching coefficients[We obtain a different coefficient for the π^2 (1-x) term in ℐ̃_qq than Ref. <cit.>, which we presume to be a typo.] ℐ̃_qq(b⃗_⊥',x,μ) = δ(1-x) + α_s C_F/2 π[1/2a(1+a)(L_b')^2 δ(1-x) + 1/1+a L_b' (-2 ℒ_0(1-x)+x+1) + 4a/1+aℒ_1(1-x) + 1-a/aπ^2/12δ(1-x) - 2a/1+a(1+x) ln(1-x) -2a/1+a1+x^2/1-xln x -x+1] +𝒪(α_s^2) , ℐ̃_qg(b⃗_⊥',x,μ) = α_s T_F/2 π[-1/1+a L_b'(2x^2 - 2x +1) - (2x^2 - 2x +1)(-2a/1+a(ln(1-x)-ln x) + 1 ) +1 ] +𝒪(α_s^2) . Here L_b' ≡ln[(b⃗_⊥')^2 μ^2+2a e^2_E/4] = L_b - 2a ln(p^-/μ) , with p^- = Q e^± Y depending on the beam. §.§ Renormalization group evolution The one-loop anomalous dimensions of the hard, beam and soft function are γ_H^(1) = α_s C_F/π[2 ln(Q^2/μ^2) -3 ] , γ_S^(1) = α_s C_F/π( -2/a L_b ) , γ_B^(1) = α_s C_F/π(1/aL_b'+3/2) . As required by consistency of the factorization, _H + _S + 2 _B = 0 , where we use that p_1^- p_2^+ = Q e^Y Q e^-Y = Q^2. To achieve NLL' accuracy, we need to include the two-loop cusp anomalous dimension, Γ_1, given in gabeta. We carry out the resummation by evolving the hard and soft function to beam scale. This requires solving the differential equation /lnμ F = γ_F F , which in general involves a convolution between _F and F, but is multiplicative for the Fourier conjugate variables b⃗_⊥ and b⃗_⊥'. The evolution kernels for evolving the hard and soft function from a scale μ_0 to μ are given by: U_H(Q^2,μ_0,μ) = exp[-4 K_Γ + 6 C_F/_0ln r ](Q^2/μ_0^2)^2η_ , U_S(b⃗_⊥, μ_0, μ, a) = exp[-4/a K_Γ](4/b⃗_⊥^2 μ_0^2 e^2_E)^2η_/a , K_ = - _0/4β_0^2[ 4π/α_s(μ_0)(1 - 1/r - ln r) + (_1/_0 - β_1/β_0)(1-r+ln r) + β_1/2β_0ln^2 r ] , η_ = - _0/2 β_0[ln r + α_s(μ_0)/4π( _1/_0 - β_1/β_0) (r-1) ] . where r = α_s(μ)/α_s(μ_0) and _0 = 4C_F , _1 = 4 C_F [(67/9-π^2/3) C_A - 20/9 T_F n_f] , β_0 = 11/3 C_A - 4/3 T_F n_f , β_1 = 34/3 C_A^2 - (20/3 C_A + 4 C_F) T_F n_f .
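For concreteness, the evolution kernels K_Γ and η_Γ above can be transcribed directly into code. The following is our own illustrative sketch (not the authors' implementation); it assumes that the running coupling α_s at the two scales is supplied by an external routine, and uses the standard QCD color factors C_F=4/3, C_A=3, T_F=1/2.

import math

CF, CA, TF = 4.0 / 3.0, 3.0, 0.5

def rge_constants(nf):
    # Cusp anomalous dimension and beta-function coefficients quoted above.
    g0 = 4.0 * CF
    g1 = 4.0 * CF * ((67.0 / 9.0 - math.pi**2 / 3.0) * CA - 20.0 / 9.0 * TF * nf)
    b0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * TF * nf
    b1 = 34.0 / 3.0 * CA**2 - (20.0 / 3.0 * CA + 4.0 * CF) * TF * nf
    return g0, g1, b0, b1

def K_eta_Gamma(alpha_mu, alpha_mu0, nf):
    # NLL evolution kernels K_Gamma and eta_Gamma, with r = alpha_s(mu)/alpha_s(mu0).
    g0, g1, b0, b1 = rge_constants(nf)
    r = alpha_mu / alpha_mu0
    K = -g0 / (4.0 * b0**2) * (
        4.0 * math.pi / alpha_mu0 * (1.0 - 1.0 / r - math.log(r))
        + (g1 / g0 - b1 / b0) * (1.0 - r + math.log(r))
        + b1 / (2.0 * b0) * math.log(r)**2)
    eta = -g0 / (2.0 * b0) * (
        math.log(r) + alpha_mu0 / (4.0 * math.pi) * (g1 / g0 - b1 / b0) * (r - 1.0))
    return K, eta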
http://arxiv.org/abs/2307.01180v1
20230703174501
PlanE: Representation Learning over Planar Graphs
[ "Radoslav Dimitrov", "Zeyang Zhao", "Ralph Abboud", "İsmail İlkan Ceylan" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Fitting an ellipsoid to a quadratic number of random points Afonso S. Bandeira, Antoine Maillard, Shahar Mendelson, Elliot Paquette August 1, 2023 =========================================================================== Graph neural networks are prominent models for representation learning over graphs, where the idea is to iteratively compute representations of nodes of an input graph through a series of transformations in such a way that the learned graph function is isomorphism invariant on graphs, which makes the learned representations graph invariants. On the other hand, it is well-known that graph invariants learned by these class of models are incomplete: there are pairs of non-isomorphic graphs which cannot be distinguished by standard graph neural networks. This is unsurprising given the computational difficulty of graph isomorphism testing on general graphs, but the situation begs to differ for special graph classes, for which efficient graph isomorphism testing algorithms are known, such as planar graphs. The goal of this work is to design architectures for efficiently learning complete invariants of planar graphs. Inspired by the classical planar graph isomorphism algorithm of Hopcroft and Tarjan, we propose as a framework for planar representation learning. includes architectures which can learn complete invariants over planar graphs while remaining practically scalable. We empirically validate the strong performance of the resulting model architectures on well-known planar graph benchmarks, achieving multiple state-of-the-art results. § INTRODUCTION Graphs are standard for representing relational data in a wide range of domains, including physical <cit.>, chemical <cit.>, and biological <cit.> systems, which led to increasing interest in machine learning over graphs. Graph neural networks (GNNs) <cit.> have become prominent for graph machine learning for a wide range of tasks, owing to their capacity to explicitly encode desirable relational inductive biases <cit.>. GNNs iteratively compute representations of nodes of an input graph through a series of transformations in such a way that the learned graph-level function represents a graph invariant: a property of graphs which is preserved under all isomorphic transformations. r0.35 [scale=.75, vertex/.style = draw,fill=black,circle,inner sep=1pt, minimum height= 1.5mm] [vertex] (v0) at (0, 0) ; [vertex] (v1) at (0, 1) ; [vertex] (v2) at (1, 0) ; [vertex] (v3) at (1.3, 0.3) ; [vertex] (v4) at (0.3, 1.3) ; [vertex] (v5) at (1.3, 1.3) ; [thick] (v0) edge (v1) (v1) edge (v2) (v2) edge (v0) ; [thick] (v3) edge (v4) (v4) edge (v5) (v5) edge (v3) ; at (0.65, 1.9) G_1; [xshift=2.5cm] [vertex] (v0) at (2, 1.3) ; [vertex] (v1) at (1, 1.3) ; [vertex] (v2) at (0, 1.3) ; [vertex] (v3) at (0, 0) ; [vertex] (v4) at (1, 0) ; [vertex] (v5) at (2, 0) ; [thick] (v0) edge (v1) (v1) edge (v2) (v2) edge (v3) (v3) edge (v4) (v4) edge (v5) (v5) edge (v0); at (1,1.9) G_2; The graphs G_1 and G_2 are indistinguishable by 1-WL. Learning functions on graphs is challenging for various reasons, particularly since the learning problem contains the infamous graph isomorphism problem, for which the best known algorithm, given in a breakthrough result by <cit.>, runs in quasi-polynomial time. A large class of GNNs can therefore only learn incomplete graph invariants for scalablity purposes. In fact, standard GNNs are known to be at most as expressive as the 1-dimensional Weisfeiler-Leman algorithm (1-WL)<cit.> in terms of distinguishing power <cit.>. 
There are simple non-isomorphic pairs of graphs, such as the pair shown in <Ref>, which cannot be distinguished by 1-WL and by a large class of GNNs. This limitation motivated a large body of work aiming to explain and overcome the expressiveness barrier of these architectures <cit.>. l0.35 0.8 [:75]R-C(=[::+60]O)-[::-60]O-[::-60]C(=[::+60]O)-[::-60]R Acid anhydride as a planar graph with node and edge types. The expressiveness limitations of standard GNNs already apply on planar graphs, since, e.g., the graphs G_1 and G_2 from <Ref> are planar. There are, however, efficient and complete graph isomorphism testing algorithms for planar graphs <cit.>, which motivates an aligned design of dedicated architectures with better properties over planar graphs. Building on this idea, this paper proposes architectures for efficiently learning complete invariants over planar graphs. Planar graphs are very prominent structures, and appear, e.g., in road networks <cit.>, circuit design <cit.>, and most importantly, in biochemistry <cit.>. Molecules are commonly encoded as planar graphs with node and edge types, as shown in <Ref>, which enables the application of many graph machine learning architectures on biochemistry tasks. For example, all molecule datasets in OGBG <cit.> fully consist of planar graphs. Biochemistry has been a very important application domain for both classical graph isomorphism testing[Graph isomorphism first appears in the chemical documentation literature <cit.>, as the problem of matching a molecular graph against a database of such graphs; see, e.g., <cit.> for a recent survey.] and for graph machine learning. Therefore, by designing architectures learning complete invariants over planar graphs, our work unlocks the full potential of both these domains. The contributions of this work can be summarized as follows: * Building on the classical literature of planar graph isomorphism testing, we introduce as a framework for learning isomorphism-complete invariant functions over planar graphs. * We derive a model architecture, , and prove that it is a complete learning algorithm on planar graphs: can distinguish any pair of non-isomorphic planar graphs. * We use an existing synthetic dataset, EXP <cit.>, and experimentally verify the expressive power of . Moreover, we design a graph regression task, where the goal is to predict the (normalized) graph clustering on a subset of the molecular dataset QM9 <cit.>. In strong contrast to classical architectures, can predict the clustering coefficients almost perfectly. * We experiment on real-world datasets and obtain multiple state-of-the-art results on molecular datasets, suggesting the strength of . The full technical details of the algorithms, and proofs are delegated to the appendix of this paper. § A PRIMER ON GRAPHS, INVARIANTS, AND GRAPH NEURAL NETWORKS Graphs and connectivity. Consider simple, undirected graphs G=(V,E,ζ), where V is a set of nodes, E⊆ V × V is a set of edges, and ζ:V→ is a (coloring) function. If the range of this map is =^d, we refer to it as a d-dimensional feature map. A graph is connected if there is a path between any two nodes and disconnected otherwise. A graph is biconnected if it cannot become disconnected by removing any single node. A graph is triconnected if the graph cannot become disconnected by removing any two nodes. A graph is planar if it can be drawn on a plane such that no two edges intersect. Components. A biconnected component of a graph G is a maximal biconnected subgraph. 
Any connected graph G decomposes into a tree of biconnected components called the Block-Cut tree of the graph, which we denote as (G). The blocks are attached to each other at shared nodes called cut nodes or articulation points. A triconnected component of a graph G is a maximal triconnected subgraph. Triconnected components of a graph G can also be represented in terms of a tree, known as the SPQR tree, which we denote as (G). It is useful to refer to components of a graph in terms of a set. Given a graph G, we denote by σ^G the set of all components induced from the nodes of (G), and by π^G the set of all biconnected components of G. Moreover, we denote by σ_u^G the set of all SQPR components of G where u appears as a node, and by π_u^G the set of all biconnected components of G where u appears as a node. Labeled trees. We sometimes refer to rooted, undirected, labeled trees Γ = (V, E, ζ), where the canonical root node is given as one of tree's centroids: a node with the property that none of its branches contains more than half of the other nodes. We denote by (Γ) the canonical root of Γ, and define the depth d_u of a node u in the tree as the node's minimal distance from the canonical root. The children of a node u is the set χ(u)={v|(u,v) ∈ E, d_v=d_u+1}. The descendants of a node u is given by set of all nodes reachable from u through a path of length k≥ 0 such that the node at position j+1 has one more depth than the node at position j, for every 0≤ j ≤ k. Given a rooted tree Γ and a node u, the subtree of Γ rooted at node u is the tree induced by the descendants of u, which we donote by Γ_u. For technical convenience, we allow the induced subtree Γ_u of a node u even if u does not appear in the tree Γ, in which case Γ_u is the empty tree. Node and graph invariants. An isomorphism from a graph G=(V,E,ζ) to a graph G'=(V',E',ζ') is a bijection f:V→ V' such that ζ(u)=ζ'(f(u)) for all u∈ V, and (u,v)∈ E if and only if (f(u),f(v))∈ E', for all u,v∈ V. A node invariant is a function ξ that associates with each graph G=(V,E,ζ) a function ξ(G) defined on V such that for all graphs G and G', all isomorphisms f from G to G', and all nodes u ∈ V, it holds that ξ(G)(u)=ξ(G')(f(u)). A graph invariant is a function ξ defined on graphs such that ξ(G)=ξ(G') for all isomorphic graphs G and G'. We can derive a graph invariant ξ from a node invariant ξ' by mapping each graph G to the multiset {{ξ'(G)(u) | u ∈ V}}. We say that a graph invariant ξ distinguishes two graphs G and G' if ξ(G)≠ξ(G'). If a graph invariant ξ distinguishes G and G' then there is no isomorphism from G to G'. If the converse also holds, then ξ is a complete graph invariant. We can speak of (in)completeness of invariants on special classes of graphs, e.g., the 1-WL computes an incomplete invariant on general graphs, but it is well-known to compute a complete invariant on trees <cit.>. Message passing neural networks. A vast majority of graph neural networks are instances of message passing neural networks (MPNNs) <cit.>. Given an input graph G=(V,E,ζ), an MPNN sets, for each node u ∈ V, an initial node representation ζ(u)=_u^(0), and iteratively computes representations ^(ℓ)_u for a fixed number of layers 0 ≤ℓ≤ L as: _u^(ℓ+1)ϕ(_u^(ℓ), ψ(_u^(ℓ), {{_v^(ℓ)|  v ∈ N_u}}) ), where ϕ and ψ denote the update and aggregation functions, respectively, and N_u denotes the neighborhood of node u. Node representations can be pooled to obtain graph-level embeddings by, e.g., summing all node embeddings. 
We denote by ^(L) the resulting graph-level embeddings. In this case, an MPNN can be viewed as an encoder that maps each graph G to a representation ^(L), computing a graph invariant. § RELATED WORK The expressive power of MPNNs is upper bounded by 1-WL <cit.> in terms of distinguishing graphs, and by the logic 𝖢^2 in terms of capturing functions over graphs <cit.>. Therefore, substantial research has been conducted to improve on these bounds. One notable direction has been to enrich node features, most prominently with unique node identifiers <cit.>, random discrete colors <cit.>, and even random noisy dimensions <cit.>. Another line of work proposes higher-order architectures <cit.> based on higher-order tensors <cit.>, or a higher-order form of message passing <cit.>, which typically align with a k-dimensional variant of the WL test, for some k>1. Higher-order architectures, however, are generally not scalable, and most existing models are therefore upper bounded by 2-WL (or, oblivious 3-WL); see <cit.> for details of the k-WL hierarchy and its correspondence with higher-order GNNs. Another body of work is based on sub-graph sampling  <cit.>, with pre-set sub-graphs used within model computations. These approaches can yield substantial expressiveness improvements, e.g., CWN <cit.> is more powerful than 2-WL (or oblivious 3-WL), but these improvements typically rely on manual sub-graph selection, and require running expensive pre-computations. Finally, MPNNs have been extended to incorporate other graph kernels, namely shortest paths <cit.>, random walks <cit.> and nested color refinement <cit.>. The bottleneck limiting the expressiveness of MPNNs is the implicit need to perform graph isomorphism checking, which is challenging in the general case. However, there are well-known classes of graphs, such as planar graphs, with efficient and complete isomorphism algorithms, thus eliminating this bottleneck. For planar graphs, the first complete algorithm for isomorphism testing was presented by <cit.>, and was followed up by a series of algorithms <cit.>. <cit.> presented an algorithm, which we refer to as , that is more suitable for practical applications, and that we align with in our work. As a result of this alignment, our approach is the first efficient and complete learning algorithm on planar graphs. The only other complete models on planar graphs are higher-order GNNs with 3-WL (or, oblivious 4-WL) power, since 3-WL is complete on planar graphs <cit.>. However, these models are not scalable in practice, since a k-WL algorithm computes a k-stable coloring of an n-node graph in time O(k^2 n^k+1log n) <cit.>. By contrast, our architecture learns representations of efficiently computed components and uses these to obtain refined representations. Our approach extends the inductive biases of MPNNs and introduces structures recently noted to be beneficial in the literature <cit.>. § A PRACTICAL PLANAR ISOMORPHISM ALGORITHM We consider simple and undirected planar graphs. The idea behind the algorithm is to compute a canonical code for planar graphs, allowing us to reduce the problem of isomorphism testing to checking whether the codes of the respective graphs are equal. A code can be thought of as a string over Σ∪, where for each graph, the algorithm computes codes for various components resulting from decompositions, and gradually builds a code representation for the graph. For readability, we allow reserved symbols "(", ")" and "," in the generated codes. 
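The decompositions that the canonical-code algorithm of this section (and, later, the learned encoders) operate on — planarity, cut nodes, and biconnected components — are available in standard graph libraries. The sketch below uses networkx, a library choice assumed here rather than made by the paper; SPQR decompositions are not built into networkx, so that step is only indicated.

```python
import networkx as nx

# The six-node pair from the introduction: two disjoint triangles vs. a single 6-cycle.
G1 = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
G2 = nx.cycle_graph(6)

for G in (G1, G2):
    planar, _ = nx.check_planarity(G)
    cuts = set(nx.articulation_points(G))                     # cut nodes of the Block-Cut tree
    blocks = [set(b) for b in nx.biconnected_components(G)]   # its biconnected components
    print(planar, cuts, blocks)
# G1: planar, no cut nodes, two triangle blocks; G2: planar, no cut nodes, one 6-node block.
# Splitting each block further into S/P/Q/R components (the SPQR tree) is a separate step
# for which networkx offers no built-in routine.
```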
We present an overview of and refer to <cit.> for details. We can assume that the planar graphs are connected, as the algorithm can be extended to disconnected graphs by independently computing the codes for each of the components, and then concatenating them in their lexicographical order. Generating a code for the the graph. Given a connected planar graph G = (V, E, ζ), decomposes G into a Block-Cut tree δ = (G). Every node u ∈δ is either a cut node of G, or a virtual node associated with a biconnected component of G. iteratively removes the leave nodes of δ as follows: if u is a leaf node associated with a biconnected component B, uses a subprocedure to compute the canonical code (δ_u)=(B) and removes u from the tree; otherwise, if the leaf node u is a cut node, overrides the initial code (({u}, {})), using an aggregate code of the removed biconnected components in which u occurs as a node. This procedure continues until there is a single node left, and the code for this remaining node is taken as the code of the entire connected graph. This process yields a complete graph invariant. Conceptually, it is more convenient for our purposes to reformulate this procedure as follows: we first canonically root δ, and then code the subtrees in a bottom-up procedure. Specifically, we iteratively generate codes for subtrees and the final code for G is then simply the code of δ_(δ). Generating a code for biconnected components. relies on a subprocedure to compute a code, given a biconnected planar graph B. Specifically, it uses the SPQR tree γ = (B) which uniquely decomposes a biconnected component into a tree with virtual nodes of one of four types: S for cycle graphs, P for two-node dipole graphs, Q for a graph that has a single edge, and finally R for a triconnected component that is not a dipole or a cycle. We first generate codes for the induced sub-graphs based on these virtual nodes. Then the SPQR tree is canonically rooted, and similarly to the procedure on the Block-Cut tree, we iteratively build codes for the subtrees of γ in a bottom-up fashion. Due to the simpler structure of the SPQR tree, instead of making overrides, the recursive code generation is done by prepending a number θ(C,C') for a parent SPQR tree node C and each C' ∈χ(C). This number is generated based on the way C and C' are connected in B. Generating codes for the virtual P and Q nodes is trivial. For S nodes, we use the lexicographically smallest ordering of the cycle, and concatenate the individual node codes. However, for R nodes we require a more complex procedure and this is done using Weinberg's algorithm as a subroutine. Generating a code for triconnected components. <cit.> has shown that triconnected graphs have only two planar embeddings, and introduced an algorithm that computes a canonical code for triconnected planar graphs <cit.> which we call . This code is used as one of the building blocks of the algorithm and can be extended to labeled triconnected planar graphs, which is essential for our purposes. Weinberg's algorithm <cit.> generates a canonical order for a triconnected component T, by traversing all the edges in both directions via a walk. Let ω be the sequence of visited nodes in this particular walk and write ω[i] to denote i-th node in it. 
This walk is then used to generate a sequence κ of same length, that corresponds to the order in which we first visit the nodes: for each node u = ω[i] that occurs in the walk, we set κ[i] = 1 + |{κ[j] | j < i}| if ω[i] is the first occurrence of u, or κ[i] = κ[min{j |ω[i] = ω[j]}] otherwise. For example, the walk ω = ⟨ v_1, v_3, v_2, v_3, v_1⟩ yields κ = ⟨ 1, 2, 3, 2, 1 ⟩. Given such a walk of length k and a corresponding sequence κ, we compute the following canonical code: (T)= (κ[1],(({ω[1]}, {}))), …, (κ[k],(({ω[k]},{})) ). § : REPRESENTATION LEARNING OVER PLANAR GRAPHS generates a unique code for every planar graph in a hierarchical manner based on the decompositions. Our framework aligns with this algorithm: we learn representations for nodes, the respective components, and the graph. Given a planar graph G = (V, E, ζ), sets the initial node representations _u^(0) = ζ(u) for each node u ∈ V, and computes, for every layer 1 ≤ℓ≤ L, the representations _C^(ℓ) of components of σ^G, the representations _B^(ℓ) of biconnected components B ∈π^G, the representations _δ_u^(ℓ) of subtrees in the Block-Cut tree δ=(G) for each node u ∈ V, and the representations _u^(ℓ) of nodes as: _C^(ℓ) = (C, {(_u^(ℓ-1),u) |  u ∈ C}) _B^(ℓ) = (B, { (_C^(ℓ),C) |  C ∈σ^B}) _δ_u^(ℓ) = (δ_u, {(_v^(ℓ-1),v) |  v ∈ V}, { (_B^(ℓ),B)| B ∈π^G}) _u^(ℓ) = ϕ(_u^(ℓ-1), {{_v^(ℓ-1) | v ∈ N_u }}, {{_v^(ℓ-1) |  v ∈ V}}, {{_T^(ℓ) |  T ∈σ_u^G }}, {{_B^(ℓ) |  B ∈π_u^G }}, _δ_u^(ℓ)) where , , and are invariant encoders; and ϕ is an function. The encoders and are parallel to the code generation procedures and of the algorithm. Since operates by learning node representations, we further simplify the final graph code generation, by learning embeddings for the cut nodes, which is implemented by the encoder. For graph-level tasks, we apply a pooling function on final node embeddings, mapping the multiset of final node embeddings to a graph-level embedding which is denoted as _G^(L). There are many choices for deriving architectures, but we propose a simple model, , to clearly identify the virtue of the model architecture which aligns with KHC, as follows: . Given a component C and the previous node representations _u^(ℓ-1) of each node u, encodes C based on the walk ω given by Weinberg's algorithm, and its corresponding sequence κ as: _C^(ℓ) = ( ∑_i=1^|ω|( ^(ℓ-1)_ω[i]_κ[i]_i ) ), where _x ∈ℝ^d is the positional embedding <cit.>. This is a simple sequence model with a positional encoding on the walk, and a second one based on the generated sequence κ. Edge features can also be concatenated while respecting the edge order given by the walk. The nodes of (G) are one of the types S, P, Q, R, where for S, P, Q types, we have a trivial ordering for the induced components, and Weinberg's algorithm also gives an ordering for R nodes that correspond to triconnected components. . Given a biconnected component B and the representations _C^(ℓ) of each component induced by a node C in γ =(B), uses the SPQR tree and the integers θ(C,C') corresponding to how we connect C and C' ∈χ(C). then computes a representation for each subtree γ_C induced by a node C in a bottom up fashion as: _γ_C^(ℓ) = ( _C^(ℓ) + ∑_C' ∈χ(C)( _γ_C'^(ℓ)_θ(C,C')) ). This encoder operates in a bottom up fashion to ensure that a subtree representation of the children of C exists before it encodes the subtree γ_C. The representation of the canonical root node in γ is used as the representation of the biconnected component B by setting: _B^(ℓ) = _γ_(γ). . 
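The walk-order sequence κ introduced above, which also supplies one of the positional signals in the triconnected-component encoder, is straightforward to compute; the following sketch reproduces the ⟨1, 2, 3, 2, 1⟩ example (Python here is an assumed choice, not the paper's implementation).

```python
def walk_order_sequence(walk):
    """First-visit order sequence kappa for a Weinberg walk omega."""
    first_visit = {}   # node -> order in which it was first encountered
    kappa = []
    for node in walk:
        if node not in first_visit:
            first_visit[node] = len(first_visit) + 1
        kappa.append(first_visit[node])
    return kappa

print(walk_order_sequence(["v1", "v3", "v2", "v3", "v1"]))  # [1, 2, 3, 2, 1]
```

In the encoder above, each kappa[i] and each position i are mapped to positional embeddings and concatenated with the corresponding node state before the summation over the walk.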
Given a subtree δ_u of a Block-Cut tree δ, the representations _B^(ℓ) of each biconnected component B, and the node representations ^(ℓ-1)_u of each node, sets _δ_u^(ℓ)=0^d(ℓ) if u is not a cut node; otherwise, it computes the subtree representations as: _δ_u^(ℓ) = ( _u^(ℓ-1) + ∑_B ∈χ(u)( _B^(ℓ) + ∑_v ∈χ(B)_δ_v^(ℓ)) ). The procedure is called in a bottom-up order to ensure that the representations of the grandchildren are already computed. We learn the cut node subtree representations instead of employing the hierarchical overrides that are present in the algorithm, as the latter is not ideal in a learning algorithm. However, with sufficient layers, these representations are complete invariants. . Putting these altogether, we update the node representations _u^(ℓ) of each node u as: _u^(ℓ) = f^(ℓ)( g_1^(ℓ)(_u^(ℓ-1) + ∑_v ∈ N_u_v^(ℓ-1))   g_2^(ℓ)∑_v ∈ V_v^(ℓ-1)  g_3^(ℓ)(_u^(ℓ-1) + ∑_B ∈π_u^G_B^(ℓ))   g_4^(ℓ)(_u^(ℓ-1) + ∑_C ∈σ_u^G_C^(ℓ))   _δ_u^(ℓ)), where f^(ℓ) and g_i^(ℓ) are either linear maps or two-layer MLPs. To obtain a graph-level representation, we pool as follows: _G = ( _ℓ=1^L( ∑_u ∈ V^G_u^(ℓ)) ). § EXPRESSIVE POWER OF We present the theoretical result of this paper, which states that can distinguish any pair of planar graphs, even when using only a logarithmic number of layers in the size of the input graphs: For any planar graphs G_1 = (V_1, E_1, ζ_1) and G_2 = (V_2, E_2, ζ_2), there exists a parametrization of with at most L = ⌈log_2(max{|V_1|,|V_2|})⌉ + 1 layers, which computes a complete graph invariant, that is, the final graph-level embeddings satisfy _G_1^(L)≠_G_2^(L) if and only if G_1 and G_2 are not isomorphic. The construction is non-uniform, since the number of layers needed depends on the size of the input graphs. In this respect, our result is similar to other results aligning GNNs with 1-WL with sufficiently many layers <cit.>. There are, however, two key differences: (i) computes isomorphism-complete invariants over planar graphs and (ii) our construction requires only logarithmic number of layers in the size of the input graphs (as opposed to linear). The theorem builds on the properties of each encoder being complete. We first show that a single application of and is sufficient to encode all relevant components of an input graph in an isomorphism-complete way: Let G=(V, E, ζ) be a planar graph. Then, for any biconnected components B, B' of G, and for any SPQR components C and C' of G, there exists a parametrization of the functions and such that: * _B≠_B' if and only if B and B' are not isomorphic, and * _C≠_C' if and only if C and C' are not isomorphic. Intuitively, this result follows from a natural alignment to the procedures of the algorithm: the existence of unique codes for different components is proven for the algorithm and we lift this result to the embeddings of the respective graphs, using the universality of MLPs <cit.>. Our main result rests on a key result related to , which states that computes complete graph invariants for all subtrees of the Block-Cut tree. We use an inductive proof, where the logarithmic bound stems from a single layer computing complete invariants for all subtrees induced by cut nodes that have at most one grandchild cut node, the induced subtree of which is incomplete. For a planar graph G = (V, E, ζ) of order n and its associated Block-Cut tree δ = (G), there exists a L = ⌈log_2(n) ⌉ + 1 layer parametrization of that computes a complete graph invariant for each subtree δ_u induced by each cut node u. 
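Before these lemmas are combined, the bottom-up order of the subtree encoding over the Block-Cut tree may be easier to see in code. The sketch below is only illustrative — a plain recursion over an assumed dict-based tree, written in PyTorch as a framework assumption — whereas the model itself computes these representations layer-wise and in parallel.

```python
import torch
import torch.nn as nn

class CutSubtreeEncoder(nn.Module):
    """Illustrative bottom-up encoding of Block-Cut subtrees rooted at cut nodes.

    Assumed input convention (not from the paper): a cut node is a dict
    {"h": node_state, "blocks": [...]}, and each block is a dict
    {"e": block_embedding, "cuts": [child cut-node dicts]}.
    """
    def __init__(self, dim):
        super().__init__()
        self.inner = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.outer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, cut):
        acc = cut["h"]
        for block in cut["blocks"]:
            b = block["e"]
            for child in block["cuts"]:
                b = b + self.forward(child)   # grandchild subtree representations first
            acc = acc + self.inner(b)
        return self.outer(acc)

# enc = CutSubtreeEncoder(dim=8)
# leaf = {"h": torch.zeros(8), "blocks": [{"e": torch.randn(8), "cuts": []}]}
# print(enc(leaf).shape)  # torch.Size([8])
```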
With <Ref> and <Ref> in place, <Ref> follows from the fact that every biconnected component and every cut node of the graph are encoded in an isomorphism-complete way, which is sufficient for distinguishing planar graphs. § EXPERIMENTAL EVALUATION In this section, we evaluate in three different settings. First, we conduct a synthetic experiment to study the structural inductive bias of . Second, we evaluate a variant using edge features on the MolHIV graph classification task from OGB <cit.>. Finally, we evaluate this variant on graph regression over ZINC <cit.> and QM9 <cit.>. We provide hyperparameter grids and training setup, as well as a study of 's expressive power using EXP <cit.>, and an ablation study on ZINC in the appendix. §.§ Clustering coefficient of QM9 graphs In this experiment, we evaluate the ability of to detect structural graph signals without an explicit reference to the target structure. To this end, we propose a simple, yet challenging, synthetic task: given a subset of graphs from QM9, we aim to predict the graph-level clustering coefficient (CC). Computing CC requires counting triangles in the graph, which is impossible to solve with standard MPNNs <cit.>. Moreover, triangles are not explicitly extracted in the original planarity algorithm that underpins , and thus the model must learn to detect triangles. r5.5cm Normalized clustering coefficient distribution of . Data setup. We select a subset of graphs from QM9, which we call , to obtain a diverse distribution of CCs. As most QM9 graphs have a CC of 0, we consider graphs with a CC in the interval [0.06, 0.16], as this range has high variability. We then normalize the CCs to the unit interval [0, 1]. We apply the earlier filtering on the original QM9 splits to obtain train/validation/test sets that are direct subsets of the full QM9 splits, and which consist of 44226, 3941 and 3921 graphs, respectively. The overall label distribution in this dataset is depicted in <Ref>. Experimental setup. Given the small size of QM9 and the locality of triangle counting, we use 32-dimensional node embeddings and 3 layers across all models. Moreover, we use a common 100 epoch training setup for fairness. For evaluation, we report mean absolute error (MAE) on the test set, averaged across 5 runs. For this experiment, our baselines are (i) an input-agnostic constant prediction that returns a minimal test MAE, (ii) the MPNNs GCNs <cit.> and GIN <cit.>, (iii) ESAN <cit.>, an MPNN that computes sub-structures through node and edge removals, but which does not explicitly extract triangles, and (iv) , using 16-dimensional positional encoding vectors. Results. Results on are provided in <Ref>. comfortably outperforms standard MPNNs. Indeed, GCN performance is only marginally better than the constant baseline and GIN's MAE is over an order of magnitude behind . This is a very substantial gap, and confirms that MPNNs are unable to accurately detect triangles to compute CCs. r0.35 MAE of and baselines on the dataset. Model MAE Constant 0.1627 ± 0.0000 GCN 0.1275 ± 0.0012 GIN 0.0612 ± 0.0018 ESAN 0.0038 ± 0.0010 0.0023 ± 0.0004 Moreover, achieves an MAE over 40% lower than ESAN. Overall, effectively detects triangle structures, despite these not being explicitly provided, and thus its underlying algorithmic decomposition effectively captures latent structural graph properties in this setting. Performance analysis. To better understand our results, we visualize the predictions of , GIN, and GCN using scatter plots in <Ref>. 
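As an aside on the data setup above: the regression targets for this task can be reconstructed with a few lines of standard tooling. The sketch below uses networkx (an assumed library) and shows both the average local clustering coefficient and the global transitivity, since the text does not pin down which graph-level definition of CC is meant; the min-max rescaling to [0, 1] is likewise only indicative.

```python
import networkx as nx

def cc_labels(G, lo=0.06, hi=0.16):
    """Two candidate graph-level clustering coefficients (which one is used is an assumption),
    plus a min-max rescaling of the first from [lo, hi] to [0, 1]."""
    avg_cc = nx.average_clustering(G)   # mean of per-node clustering coefficients
    trans = nx.transitivity(G)          # 3 * (number of triangles) / (number of connected triples)
    return {"avg_clustering": avg_cc, "transitivity": trans,
            "normalized": (avg_cc - lo) / (hi - lo)}

G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])  # toy graph containing a single triangle
print(cc_labels(G))
```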
As expected, essentially follows the ideal regression line. By contrast, GIN and GCN are much less stable. Indeed, GIN struggles with CCs at the extremes of the [0, 1] range, but is better at intermediate values, whereas GCN is consistently unreliable, and rarely returns predictions above 0.7. This highlights the structural limitation of GCNs, namely its self-loop mechanism for representation updates, which causes ambiguity for detecting triangles. §.§ Graph classification on MolHIV r0.35 ROC-AUC of and baselines on MolHIV. GCN <cit.> 75.58 ±0.97 GIN <cit.> 77.07 ±1.40 PNA <cit.> 79.05 ±1.32 ESAN <cit.> 78.00 ± 1.42 GSN <cit.> 80.39±0.90 CIN <cit.> 80.94±0.57 HIMP <cit.> 78.80 ± 0.82 80.04 ± 0.50 Model setup. We use a variant that uses edge features, called (defined in the appendix), on OGB <cit.> MolHIV and compare against baselines. We instantiate with an embedding dimension of 64, 16-dimensional positional encodings, and report the average ROC-AUC across 10 independent runs. Results. The results for and other baselines on MolHIV are shown in <Ref>: Despite not explicitly extracting relevant cycles and/or molecular sub-structures, outperforms standard MPNNs and the domain-specific HIMP model. It is also competitive with substructure-aware models CIN and GSN, which include dedicated structures for inference. Therefore, performs strongly in practice with minimal design effort, and effectively uses its structural inductive bias to remain competitive with dedicated architectures. §.§ Graph regression on QM9 Experimental setup. We map QM9 <cit.> edge types into features by defining a learnable embedding per edge type, and subsequently apply to the dataset. We evaluate on all 13 QM9 properties following the same splits and protocol (with MAE results averaged over 5 test set reruns) of GNN-FiLM <cit.>. We compare R-SPN against GNN-FiLM models and their fully adjacent (FA) variants <cit.>, as well as shortest path networks (SPNs) <cit.>. We report results with an 3-layer using 128-dimensional node embeddings and 32-dimensional positional encodings. Results. results on QM9 are provided in <Ref>. In this table, outperforms high-hop SPNs, despite being simpler and more efficient, achieving state-of-the-art results on 9 of the 13 tasks. The gains are particularly prominent on the first five properties, where R-SPNs originally provided relatively little improvement over FA models, suggesting that offers complementary structural advantages to SPNs. This was corroborated in our experimental tuning: performance peaks around 3 layers, whereas SPN performance continues to improve up to 8 (and potentially more) layers, which suggests that is more efficient at directly communicating information, making further message passing redundant. Overall, maintains the performance levels of R-SPN with a smaller computational footprint. Indeed, messages for component representations efficiently propagate over trees in , and the number of added components is small (see appendix for more details). Therefore and the framework offer a more scalable alternative to explicit higher-hop neighborhood message passing over planar graphs. §.§ Graph regression on ZINC r0.60 MAE of (E-)and baselines on ZINC. 
ZINC(12k) ZINC(12k) ZINC(Full) Edge Features No Yes Yes GCN <cit.> 0.278 ±0.003 - - GIN(-E) <cit.> 0.387 ±0.015 0.252 ±0.014 0.088 ±0.002 PNA <cit.> 0.320 ±0.032 0.188 ±0.004 0.320 ±0.032 GSN <cit.> 0.140 ±0.006 0.101 ±0.010 - CIN <cit.> 0.115±0.003 0.079±0.006 0.022±0.002 ESAN <cit.> - 0.102 ±0.003 - HIMP <cit.> - 0.151 ±0.006 0.036 ±0.002 (E-) 0.124± 0.004 0.076±0.003 0.028±0.002 Experimental setup. We (i) evaluate on the ZINC subset (12k graphs) without edge features, (ii) evaluate on this subset and on the full ZINC dataset (500k graphs). To this end, we run and with 64 and 128-dimensional embeddings, 16-dimensional positional embeddings, and 3 layers. For evaluation, we compute MAE on the respective test sets, and report the best average of 10 runs across all experiments. Results. Results on ZINC are shown in <Ref>: Both and perform strongly, with achieving state-of-the-art performance on ZINC12k with edge features and both models outperforming all but one baseline in the other two settings. These results are very promising, and highlight the robustness of (E-). § LIMITATIONS, DISCUSSIONS, AND OUTLOOK Overall, both and perform strongly across all our experimental evaluation tasks, despite competing against specialized models in each setting. Moreover, both models belong to the framework, and thus are isomorphism-complete over planar graphs. This implies that these models benefit substantially from the structural inductive bias and expressiveness of classical planar algorithms, which in turn makes them a reliable, efficient, and robust solution for graph representation learning over planar graphs. Though the framework is a highly effective and easy to use solution for planar graph representation learning, it is currently limited to planar graphs. Indeed, the classical algorithms underpinning do not naturally extend beyond the planar graph setting, which in turn limits the applicability of the approach. Thus, a very important avenue for future work is to explore alternative (potentially incomplete) graph decompositions that strike a balance between structural inductive bias, efficiency and expressiveness on more general classes of graphs. The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work. (http://dx.doi.org/10.5281/zenodo.22558) plainnat § RUNTIME ANALYSIS OF In this section, we study the complexity of the model. To this end, we analyze the complexity of the decomposition computation, the sizes of the resulting components, and finally the time to run on these components. Computing components of a planar graph. Given an input graph G=(V,E, ζ), first computes the SPQR components/SPQR tree, and identifies cut nodes. For this step, we follow a simple O(|V|^2) procedure, analogously to the computation in the algorithm. Note that this computation only needs to run once, as a pre-processing step. Therefore, this pre-computation does not ultimately affect runtime for model predictions. Size and number of computed components. * Cut nodes: The number of cut nodes in G is at most |V|, and this corresponds to the worst-case when G is a tree. * Biconnected Components: The number of biconnected components |π^G| is at most |V|-1, also obtained when G is a tree. This setting also yields a worst-case bound of 2|V|-2 on the total number of nodes across all individual biconnected components. 
* SQPR components: As proved by <cit.>, given a biconnected component B, the number of corresponding SPQR components, as well as their size, is bounded by the number of nodes in B. The input graph may have multiple biconnected components, whose total size is bounded by 2|V|-2 as described earlier. Thus, we can apply the earlier result from <cit.> to obtain an analogous bound of 2|V|-2 on the total number of SPQR components. * SPQR trees: As each SPQR component corresponds to exactly one node in a SPQR tree. The total size of all SPQR trees is upper-bounded by 2|V|-2. Complexity of . computes a representation for each SQPR component edge. This number of edges, which we denote by e_C, is in fact linear in V: Indeed, following Euler's theorem for planar graphs (|E| ≤ 3|V| - 6), the number of edges per SQPR component is linear in its size. Moreover, since the total number of nodes across all SQPR components is upper-bounded by 2|V|-2, the total number of edges is in O(V). Therefore, as each edge representation can be computed in a constant time with an MLP, computes all edge representations across all SQPR components in O(e_C· d)= O(|V|d^2) time, where d denotes the embedding dimension of the model. Using the earlier edge representations, performs an aggregation into a triconnected component representation by summing the relevant edge representations, and this is done in O(e_C d). Then, the sum outputs across all SPQR components are transformed using an MLP, in O(|sigma^G| d^2). Therefore, the overall complexity of is O((e_c + |σ^G|)d^2) = O(|V| d^2). Complexity of . recursively encodes nodes in the SPQR tree. Each SPQR node is aggregated once by its parent and an MLP is applied to each node representation once. The total complexity of this call is therefore O(|σ^G|d^2) = O(|V|d^2). Complexity of . A similar complexity of O(|V|d^2) applies for , as follows a similar computational pipeline. Complexity of node update. As in standard MLPs, message aggregation runs in O(|E|d), and the combine function runs in O(|V|d^2). Global readouts can be computed in O(|V|d). Aggregating from bi-connected components also involves at most O(|V|d) messages, as the number of messages corresponds to the total number of bi-connected component nodes, which itself does not exceed 2|V|-2. The same argument applies to messages from SQPR components: nodes within these components will message their original graph analogs, leading to the same bound. Finally, the cut node messages are trivially linear, i.e., O(|V|), as each node receives exactly one message (no aggregation, no transformation). Overall, this leads to the step computation having a complexity of O(|V|d^2). Overall complexity runs in O(|V|d^2) time, as this is the asymptotic bound of each of its individual steps. The d^2 term primarily stems from MLP computations, and is not a main hurdle to scalability, as the used embedding dimensionality in our experiments is usually small. § PROOFS OF THE STATEMENTS In this section, we provide the proofs of the statements from the main paper. Throughout the proofs, we make the standard asssumption that the initial node features are from a compact space K ⊆^d, for some d∈^+. We also often need to canonically map elements of finite sets to the integer domain: given a finite set S, and any element x∈ S of this set, the function Ψ(S, x): S →{1, …, |S|} maps x to a unique integer index given by some fixed order over the set S. 
Furthermore, we also often need to injectively map sets into real numbers, which is given by the following lemma. Given a bounded set S, there exists an injective map g : (S) → [0, 1]. For each subset M ⊆ S, consider the mapping: g(M) = 1/1 + ∑_x ∈ MΨ(S, x) |S|^Ψ(M, x), which clearly satisfies g(M_1) ≠ g(M_2) for any M_1 ≠ M_2 ⊆ S. We now provide proofs for <Ref>, <Ref> which are essential for <Ref>. Let us first prove <Ref>: Lemma 6.2. Let G=(V, E, ζ) be a planar graph. Then, for any biconnected components B, B' of G, and for any SPQR components C and C' of G, there exists a parametrization of the functions and such that: * _B≠_B' if and only if B and B' are not isomorphic, and * _C≠_C' if and only if C and C' are not isomorphic. We show this by first giving a parametrization of to distinguish all SPQR components, and then use this construction to give a parametrization of to distinguish all biconnected components. The proof aligns the encoders with the code generation procedure in the algorithm. The parametrization needs only a single layer, so we will drop the superscripts in the node representations and write, e.g., _u, for brevity. The construction yields single-dimensional real-valued vectors, and, for notational convenience, we view the resulting representations as reals. Parametrizing . We initialize the node features as _u = ζ(u), where ζ:V→^d. Given an SPQR component C and the initial node representations _u of each node u, encodes C based on the walk ω given by Weinberg's algorithm, and its corresponding sequence κ as: _C = ( ∑_i=1^|ω|( _ω[i]_κ[i]_i ) ). In other words, we rely on the walks ω generated by Weinberg's algorithm: we create a walk on the triconnected components that visits each edge exactly twice. Let us fix n = 4(|V| + |E| + 1), which serves as an upper bound for size of the walks ω, and the size of the walk-induced sequence κ. We show a bijection between the multiset of codes ℱ (given by the algorithm), and the multiset of representations ℳ (given by ), where the respective multisets are defined, based on G, as follows: * ℱ = {{(C) | C ∈σ^G }}: the multiset of all codes of the SPQR components of G. * ℳ= {{_C | C ∈σ^G }}: the multiset of the representations of the SPQR components of G. Specifically, we prove the following: There exists a bijection ρ between ℱ and ℳ such that, for any SPQR component C: ρ(_C)= (C) and ρ^-1((C))= _C. Once <Ref> established, the desired result is immediate, since, using the bijection ρ, and the completeness of the codes generated by the algorithm, we obtain: _C ≠_C' ⇔ ρ^-1((C)) ≠ρ^-1((C')) ⇔ (C) ≠(C') ⇔ C and C' are non-isomorphic. To prove the <Ref>, first note that the canonical code given by Weinberg's algorithm for any component C has the form: (C)=(C)= (κ[1],(({ω[1]}, {}))), …, (κ[k],(({ω[k]},{})) ) = (κ[1],ζ(ω[1])), …, (κ[k],ζ(ω[k]) ). There is a trivial bijection between the codes of the form (<ref>) and sets of the following form: S_C={(ζ(ω[i]), κ[i], i ) | 1 ≤ i ≤ |ω| }, and, as a result, for each component C, we can refer to the set S_C that represents this component C instead of (C). Sets of this form are of bounded size (since the number of walks are bounded in C) and each of their elements is from a countable set (since the graph size is bounded). By <Ref> there exists an injective map g between such sets and the interval [0, 1]. 
Since the size of every such set is bounded, and every tuple is from a countable set, we can apply Lemma 5 of <cit.>, and decompose the function g as: g(S_C) = ϕ(∑_x ∈ S_C f(x) ), for an appropriate choice of ϕ : ^d'→ [0, 1] and f(x) ∈^d'. Based on the structure of the elements of S_C, we can further rewrite this as follows: g(S_C) = ϕ(∑_i=1^|ω| f ( (ζ(ω[i]), κ[i], i ) ) ). Observe that this function closely resembles (<ref>). More concretely, for any ω and i, we have that ζ(ω[i])=_ω[i], and, moreover, _κ[i] and _i are the positional encodings of κ[i], and i, respectively. Hence, it is easy to see that there exists a function μ, which satisfies, for every i: ((ζ(ω[i])), κ[i], i) = μ(_ω[i]_κ[i]_i). This implies the following: g(S_C) = ϕ(∑_i=1^|ω| f ( (ζ(ω[i]), κ[i], i )) ) = ϕ(∑_i=1^|ω| f ( μ(_ω[i]_κ[i]_i) ) ) =ϕ(∑_i=1^|ω| (f ∘μ) (_ω[i]_κ[i]_i) ). Observe that this function can be parametrized by : we apply the universal approximation theorem <cit.>, and encode (f ∘μ) with an MLP (i.e., the inner MLP) and similarly encode ϕ with another MLP (i.e., the outer MLP). This establishes a bijection ρ between ℱ and ℳ: for any SPQR component C, we can injectively map both the code (C) (or, equivalently the corresponding set S_C) and the representation _C to the same unique value using the function g as _C=g(S_C), and we have shown that there exists a parametrization of for this target function g. Parametrizing . In this case, we are given a biconnected component B and the representations _C of each component C from the SPQR tree γ =(B). We consider the representations _C which are a result of the parametrization of , described earlier. uses the SPQR tree and the integers θ(C,C') corresponding to how we connect C and C' ∈χ(C). then computes a representation for each subtree γ_C induced by a node C in a bottom up fashion as: _γ_C = ( _C + ∑_C' ∈χ(C)( _γ_C'_θ(C,C')) ). This encoder operates in a bottom up fashion to ensure that a subtree representation of the children of C exists before it encodes the subtree γ_C. The representation of the canonical root node in γ is used as the representation of the biconnected component B by setting: _B = _γ_(γ). To show the result, we first note that the canonical code given by the algorithm also operates in a bottom up fashion on the subtrees of γ. We have two cases: Case 1. For a leaf node C in γ, the code for γ_C is given by (γ_C) = (C). This case can be seen as a special case of Case 2 (and we will treat it as such). Case 2. For a non-leaf node C, we concatenate the codes of the subtrees induced by the children of C in their lexicographical order, by first prepending the integer given by θ to each child code. Then, we also prepend the code of the SPQR component C to this concatenation to get (γ_C). More precisely, if the lexicographical ordering of χ(u), based on (γ_C') for a given C' ∈χ(C) is x[1], …, x[|χ(C)|], then the code for γ_C is given by: (γ_C)= ( (C), (θ(C, x[1]), (γ_x[1])), …, (θ(C, x[|x|]), (γ_x[|x|]))) ) We show a bijection between the multiset of codes ℱ (given by the algorithm), and the multiset of representations ℳ (given by ), where the respective multisets are defined, based on G and the SPQR tree γ, as follows: * ℱ = {{(γ_C) | C ∈γ}}: the multiset of all codes of all the induced SPQR subtrees. * ℳ = {{_γ_C| C ∈γ}}: the multiset of the representations of all the induced SPQR subtrees. 
Analogously to the proof of , we prove the following claim: There exists a bijection ρ between ℱ and ℳ such that, for any node C in γ: ρ(_γ_C)=(γ_C) and ρ^-1((γ_C)) = _γ_C Given <Ref>, the result follows, since, using the bijection ρ, and the completeness of the codes generated by the algorithm, we obtain: _B ≠_B' ⇔ _γ_(B) ≠_γ_(B') ⇔ ρ^-1(((B))) ≠ρ^-1(((B'))) ⇔ ((B)) ≠((B')) ⇔ B and B' are non-isomorphic. To prove <Ref>, let us first consider how (γ_C) is generated. For any node C in γ, there is a bijection between the codes of the form given in <Ref> and sets of the following form: S_C = {{{(C) }}}∪{{ (θ(C, C'), (γ_C')) | C' ∈χ(C) }} Observe that the sets of this form are of bounded size (since the number of children is bounded) and each of their elements is from a countable set (since the graph size is bounded, the different codes that can be generated are also bounded). By <Ref> there exists an injective map g from such sets to the interval [0, 1]. Since the size of every such set is bounded, and every tuple is from a countable set, we can apply Lemma 5 of <cit.>, and decompose the function g as: g(S_C) = ϕ(∑_x ∈ S f(x) ), for an appropriate choice of ϕ : ^d'→ [0, 1] and f(x) ∈^d'. Based on the structure of the elements of S_C, we can further rewrite this as follows: g(S_C) = ϕ(f((C)) + ∑_C' ∈χ(C) f ( (θ(C, C'),(γ_C') ) ) ). Observe the connection between this function and (<ref>): for every C' ∈χ(C), we have _γ_C' instead of (γ_C), and, moreover, _θ(C, C') is a positional encoding of θ(C, C'). Then, there exists a function μ such that: (θ(C, C'), (γ_C')) = μ(_θ(C, C')_γ_C'), provided that the following condition is met: _γ_C' = (γ_C') for any C' ∈χ(C). Importantly, the choice for μ can be the same for all nodes C. Hence, assuming the condition specified in <Ref> is met, the function g can be further decomposed using some function f'(x) ∈^d' which satisfies: g(S_C) = ϕ(f({(C)}) + ∑_C' ∈χ(C) f ( (θ(C, C'),(γ_C') ) ) ) = ϕ( _C + ∑_C' ∈χ(C) f' ( μ( _θ(C, C')_γ_C') ) ) = ϕ(_C + ∑_C' ∈χ(C) (f' ∘μ) ( _θ(C, C')_γ_C') ). Observe that this function can be parametrized by [Note that f((C)) can be omitted, because has an outer MLP, which can incorporate f.] (<ref>): we apply the universal approximation theorem <cit.>, and encode (f' ∘μ) with an MLP (i.e., the inner MLP) and similarly encode ϕ with another MLP (i.e., the outer MLP). To conclude the proof of <Ref> (and thereby the proof of <Ref>), we need to show the existence of bijection ρ between ℱ and ℳ such that, for any node C in γ: ρ(_γ_C)=(γ_C) and ρ^-1((γ_C)) = _γ_C. This can be shown by a straight-forward induction on the structure of the tree γ. For the base case, it suffices to observe that C is a leaf node, which implies _γ_C = ϕ(_C) and (γ_C) = {(C)}. The existence of a bijection is then warranted by <Ref>. For the inductive case, assume that there is a bijection between the induced representations of the children of C and their codes to ensure that the condition given in <Ref> is met (which holds since the algorithm operates in a bottom up manner). Using the injectivity of g, and the fact that all subtree representations (of children) already admit a bijection, we can easily extend this to a bijection on all nodes C of γ. We have provided a parametrization of and and proven that they can compute representations which bijectively map to the codes of the algorithm for the respective components, effectively aligning algorithm with our encoders for these components. 
Given the completeness of the respective procedures in , we conclude that the encoders are also complete in terms of distinguishing the respective components. Having showed that a single layer parametrization of and is sufficient for distinguishing the biconnected and triconnected components, we proceed with the main lemma. Lemma 6.3. For a planar graph G = (V, E, ζ) of order n and its associated Block-Cut tree δ = (G), there exists a L = ⌈log_2(n) ⌉ + 1 layer parametrization of that computes a complete graph invariant for each subtree δ_u induced by each cut node u. recursively computes the representation for induced subtrees δ_u from cut nodes u, where δ = (G). Recall that in Block-Cut trees, the children of a cut node always represent a biconnected component, and the children of a biconnected component always represent a cut node. Therefore, it is natural to give the update formula for a cut node u in terms of the biconnected component B represented by u's children χ(u) and B's children χ(B). _δ_u^(ℓ) = ( _u^(ℓ-1) + ∑_B ∈χ(u)( _B^(ℓ) + ∑_v ∈χ(B)_δ_v^(ℓ)) ). To show the result, we first note that the canonical code given by the algorithm also operates in a bottom up fashion on the subtrees of δ. We have three cases: Case 1. For a leaf B in δ, the code for δ_B is given by (δ_B) = (B). This is because the leafs of δ are all biconnected components. Case 2. For a non-leaf biconnected component B in δ, we perform overrides for the codes associated with each child cut node, and then use . More precisely, we override the associated ({{u}, {}}) := (δ_u) for all u ∈χ(B), and then we compute (δ_B) = (B). Case 3. For a non-leaf cut node u in δ, we encode in a similar way to : we get the set of codes induced by the children of u in their lexicographical order. More precisely, if the lexicographical ordering of χ(u), based on (δ_B) for a given B ∈χ(u) is x[1], …, x[|χ(u)|], then the code for δ_u is given by: (δ_u)= ((δ_x[1])), …, (δ_x[|x|]) ) Instead of modelling the overrides (as in Case 2), learns the cut node representations. We first prove this result by giving a parametrization of which uses linearly many layers in n and then show how this construction can be improved to use logarithmic number of layers. Specifically, we will first show that can be parametrized to satisfy the following properties: * For every cut node u, we reserve a dimension in _u^L that stores the number of cut nodes in δ_u. This is done in the part of the corresponding layer. * There is a bijection λ between the representations of the induced subtrees and subtree codes, such that λ(_δ_u^(L))=(δ_u) and λ^-1((δ_u^(L))) = _δ_u^(L), for cut nodes u that have strictly less than L cut nodes in δ_u. * There is a bijection ρ between the cut node representations and subtree codes, such that ρ(_u^(L))=(δ_u) and ρ^-1((δ_u^(L))) = _u^(L), for cut nodes u that have strictly less than L cut nodes in δ_u. Observe that the property (2) directly gives us a complete graph invariant for each subtree δ_u induced by each cut node u, since the codes for every induced subtree are complete, and through the bijection, we obtain complete representations. The remaining properties are useful for later in order to obtain a more efficient construction. The function in every layer is crucial for our constructions, and we recall its definition: _u^(ℓ) = f^(ℓ)( g_1^(ℓ)(_u^(ℓ-1) + ∑_v ∈ N_u_v^(ℓ-1))   g_2^(ℓ)∑_v ∈ V_v^(ℓ-1)  g_3^(ℓ)(_u^(ℓ-1) + ∑_B ∈π_u^G_B^(ℓ))   g_4^(ℓ)(_u^(ℓ-1) + ∑_C ∈σ_u^G_C^(ℓ))   _δ_u^(ℓ)). 
We prove that there exists a parametrization of which satisfies the properties (1)-(3) by induction on the number of layers L. Base case. L = 1. In this case, there are no cut nodes satisfying the constraints, and the model trivially satisfies the properties (2)–(3). To satisfy (1), we can set the inner MLP in as the identity function, and the outer MLP as a function which adds 1 to the first dimension of the input embedding. This ensures that the representations _δ_u^(1) of cut nodes u have their first components equal to the number of cut nodes in δ_u. We can encode the property (1) in _u^(ℓ) using the representation _δ_u^(1), since the latter is a readout component in . Inductive step. L>1. By induction hypothesis, there is an (L-1)-layer parametrization of which satisfies the properties (1)–(3). We can define the L-th layer so that our L layer parametrization of satisfies all the properties: Property (1): For every cut node u, the function has a readout from _u^(L-1) and _δ_u^(L), which allows us to copy the first dimension of _u^(L-1) into the first dimension of _u^(L) using a linear transformation, which immediately gives us the property. Property (2): For this property, let us first consider the easier direction. Given the code of δ_u, we want to find the representation for the induced subtree of u. From the subtree code, we can reconstruct the induced subtree δ_u, and then run a L-layer on reconstructed Block-Cut Tree to find the representation. As for the other direction, we want to find the subtree code of given the representation of the induced subgraph. The encodes a multiset {{ (_B, {{_v | v ∈χ(B) }} )  |  B ∈χ(u) }}. By induction hypothesis, we know that all grandchildren v ∈χ^2(u) already have properties (1)–(3) satisfied for them with the first (L-1) layers. Hence, using the parametrization of and given in <Ref>, as part of the L-th layer, we can get a bijection between and biconnected component representation. This way we can obtain (B) for all the children biconnected components B ∈χ(u), with all the necessary overriding. Having all necessarily overrides is crucial, because to get the code for the cut node u, we need to concatenate the biconnected codes from <ref> that already have the required overrides. Hence, we make the parametrization of encode multisets of representations for biconnected components B ∈χ(u), and by similar arguments as in the proof of <Ref>, this can be done using the MLPs in . Property (3): Using the bijection from (2) as a bridge, we can easily show the property (3). In the update formula, we appended _δ_u^(L) using the MLP. If the MLP is bijective with the dimension taken by _δ_u^(L), we get a bijection between the node representation and the subtree representation. By transitivity, we get a bijection between node representations and subtree codes. This concludes our construction using L=O(n) layers. Efficient construction. We now show that the presented construction can be made more efficient, using only L = ⌈log_2(n) ⌉ + 1 layers. This is achieved by treating the cut nodes u differently based on the number of cut nodes they include in their induced subtrees δ_u. In this construction, the property (1) remains the same, but the properties (2)–(3) are modified: * For every cut node u, we reserve a dimension in _u^(L) that stored to the number of cut nodes in δ_u. This is done in the part of the corresponding layer. 
* There is a bijection λ between the representations of the induced subtrees and subtree codes, such that λ(_δ_u^(L))=(δ_u) and λ^-1((δ_u^(L))) = _δ_u^(L), for cut nodes u that have strictly less than 2^(L-1) cut nodes in δ_u. * There is a bijection ρ between the cut node representations and subtree codes, such that ρ(_u^(L))=(δ_u) and ρ^-1((δ_u^(L))) = _u^(L), for cut nodes u that have strictly less than 2^(L-1) cut nodes in δ_u. In this construction, we treat the cut nodes u which have strictly less than 2^L-1 cut nodes in their induced subtrees δ_u, differently than the other cut nodes. In the first case, we consider an arbitrary cut node u with less than 2^L-1 cut nodes in δ_u. In this case, the same arguments apply as before. As for the second case, we observe the following: there is at most one grandchild v_1 ∈χ^2(u) with at least 2^L-2 cut nodes in δ_v_1, as otherwise u would have at least 2^L-1 cut nodes in it. Using the same argument for v_1, there is at most one v_2 ∈χ^2(v_1) with at least 2^L-2 cut node in δ_v_2. By repeating this until there are no grandchildren with at least 2^L-2 cut nodes in their subtree, we can form a sequence of cut nodes u = v_0, v_1, …, v_k, such that v_i+1 is a grandchild of v_i for all 0 ≤ i < k. By induction hypothesis, all other cut nodes in δ_u already satisfy properties (2) and (3). Furthermore, we already know that the earlier parametrization ensures that the properties will hold for v_k. We will now extend this parametrization to ensure that the properties will also hold for the other nodes in this sequence, by modifying . Because of the bottom up nature of , when encoding v_i for 0 ≤ i < k, we will already have the new representation _δ_v_i+1^(L). Furthermore, since there is exactly one grandchild that does not satisfy the required properties, only one representation _B^(L) of a child biconnected component B ∈χ(v_i) will not satisfy properties (2) and (3) after L-1 layers. However, in , we also have access to the representations of enduced subtrees from cut nodes (in the summation before the inner MLP), so it suffices to encode this sequence of cut nodes separately, so that we will be able to distinguish _γ_v_i+1^(L) from the rest of the subtrees induced by grandchildren (because the new representation _γ_v_i+1^(L) already satisfies the properties). This can be done in a single , layer where the only difference between this construction and the less efficient one is that we need reserve one layer which is parametrized differently than the other layers to handle the cut nodes in this sequence. Theorem 6.1. For any planar graphs G_1 = (V_1, E_1, ζ_1) and G_2 = (V_2, E_2, ζ_2), there exists a parametrization of with at most L = ⌈log_2(max{|V_1|,|V_2|})⌉ + 1 layers, which computes a complete graph invariant, that is, the final graph-level embeddings satisfy _G_1^(L)≠_G_2^(L) if and only if G_1 and G_2 are not isomorphic. The “only if” direction is immediate because is an invariant model for planar graphs. To prove the “if” direction, we do a case analysis on the root of the two Block-Cut Trees. For each case, we provide a parametrization of such that _G_1^(L)≠_G_2^(L) for any two non-isomorphic graphs G_1 and G_2. A complete model can be obtained by appropriately unifying the respective parametrizations. *Case 1. (δ_1) and (δ_2) represents two cut nodes. Consider a parametrization of the final update formula, where only cut node representation is used, and a simplified readout function that only aggregates from the last layer. 
We can rewrite the readout for a graph in terms of the cut node representation from the last layer. _G^(L) = ( ∑_u ∈ V^G( ^(L)_δ_u ) ). Let (G) = {{_δ_u^(L) |  u ∈ V^G }}. Intuitively, (G) is a multiset of cut node representations from the last layer. We assume |V_δ_1| ≤ |V_δ_2| without loss of generality. Consider the root node (δ_2) of the Block-Cut Tree δ_2. By Lemma 6.3, we have _(δ_2) as a complete graph invariant with L layers. Since δ_2 cannot appear as a subtree of δ_1, _(δ_2)∉(G_1). Hence, (G_1) ≠(G_2). Since this model can define an injective mapping on the multiset (G) using similar arguments as before, we get that _G_1^(L)≠_G_2^(L). *Case 2. (δ_1) and (δ_2) represents two biconnected components. We use a similar strategy to prove Case 2. First, we consider a simplified model, where the update formula considers biconnected components only and the final readout aggregates from the last layer. We similarly give the final graph readout in terms of the biconnected component representation from the last layer. _G = ( ∑_u ∈ V^G( (_u^(L-1) + ∑_B ∈π_u^G_B^(L)) ) ). Let (G) = (_u^(L-1), {{_B^(L) |  B ∈π_u^G }}) |  u ∈ V^G. In Lemma 6.3, we prove that _B^(L) is also a complete invariant for the subtree rooted at B in the Block-Cut Tree. First, we show (G_1) ≠(G_2). As before, we assume |V_δ_1| ≤ |V_δ_2| without loss of generality. Consider how the biconnected component representation _(δ_2) appears in the two multisets of pairs. For G_2, there exist at least one node u and pair: (_u^(L-1), {{_B^(L) |  B ∈π_u^G }}) ∈(G_2), such that _(δ_2)∈{{_B^(L) |  B ∈π_u^G }}. However, because _(δ_2) is a complete invariant for δ_2 and δ_2 cannot appear as a subtree in δ_1, no such pair exists in (G_1). Given (G_1) ≠(G_2), we can parameterize the MLPs to define an injective mapping to get that _G_1^(L)≠_G_2^(L). *Case 3. (δ_1) represents a cut node and (δ_2) represents a biconnected component. We can distinguish G_1 and G_2 using a simple property of (δ_1). Recall that π_u^G represents the set of biconnected components that contains u, and χ^δ(C) represents the C's children in the Block-Cut Tree δ. For (δ_1), we have |π_u^G_1|=|χ^δ_1(u)|. However, for any other cut node u, including the non-root cut node in δ_1 and all cut nodes in δ_2, we have |π_u^G|=|χ^δ(u)|+1, because there must be a parent node v of u such that v ∈π_u^G_1 but v ∉χ^δ(u). Therefore, we consider a parametrization of a one-layer model that exploits this property. In , we have a constant vector [1,0]^⊤ for all biconnected components. In , we learn [0, |χ_u|]^⊤ for all cut nodes u. In the update formula, we have [|π_u^G| - |χ^δ(u)| - 1, 0]^⊤. All of the above specifications can be achieved using linear maps. For the final readout, we simply sum all node representations with no extra transformation. Then, for (δ_1), we have _δ_u = [-1,0]^⊤. For any other cut node u, we have _u = [0, 0]^⊤. For all non-cut nodes u, we also have _u = [0, 0]^⊤ because |π_u^G|=1 and _δ_u = [0,0]^⊤. Summing all the node representations yields _G_1 = [-1,0]^⊤ but _G_2=[0,0]^⊤. Hence, we obtain _G_1^(L)≠_G_2^(L), as required. § FURTHER EXPERIMENTAL DETAILS §.§ Link to code The code for our experiments, as well as instructions to reproduce our results and set up dependencies, can be found at this anonymized GitHub repository: <https://github.com/ZZYSonny/PlanE> §.§ Computational resources We run all experiments on 4 cores from Intel Xeon Platinum 8268 CPU @ 2.90GHz with 32GB RAM. 
In Table <ref>, we report the approximate time to train a model on each dataset and with each tuned hidden dimension value. §.§ Distinguishing graphs on EXP Experimental setup. In this experiment, we evaluate the model on the planar EXP benchmark <cit.> and compare with standard MPNNs, as well as MPNNs with random node initialization and higher-order GNNs. The EXP dataset consists of a set of planar graphs which each represent a satisfiability problem (SAT) instance. These instances are grouped into pairs, such that these pairs cannot be distinguished by 1-WL, and lead to different SAT outcomes. The task in this dataset is to predict the satisfiability of each instance. Therefore, to obtain above-random performance on this dataset, a model must have a sufficiently strong expressive power (2-WL or more). To conduct this experiment, we use a 2-layer model with 64-dimensional node embeddings. We instantiate the tri-connected component encoder with 16-dimensional positional encodings, each computed using a periodicity of 64. We follow the same protocol as the original work: we use 10-fold cross validation on the dataset, train on each fold for 50 epochs using the Adam <cit.> optimizer with a learning rate of 10^-3, and binary cross entropy loss. r5.75cm Accuracy results on the EXP synthetic benchmark. Baselines are as reported in the original paper <cit.>. Model Accuracy (%) GCN-RNI(N) 98.0± 1.85 1-2-3-GCN-L 50.0 3-GCN 99.7± 0.004 100.0± 0.0 Results. The results of PlanE and other baselines on the EXP dataset are reported in <Ref>. As expected, PlanE almost perfectly solves the task, achieving a performance exceeding 99% despite not using any external components, e.g., random features, or computationally intractable methods, e.g. higher-order GNNs. In fact, the model solely relies on classical algorithm component decompositions, and does not rely on explicitly selected and designed features, to achieve this performance gain. Therefore, this experiment highlights that the general algorithmic decomposition effectively improves expressiveness in a practical machine learning setup, and leads to strong PlanE performance on EXP, where a standard MPNN would otherwise fail. §.§ architecture builds on , and additionally processes edge features within the input graph. To this end, it supersedes the original update equations for SPQR components and nodes with the following analogs: _C^(ℓ) = ( ∑_i=1^|ω|( ^(ℓ-1)_ω[i]^(ℓ-1)_ω[i],ω[(i+1)%|w|]_κ[i]_i ) )), and _u^(ℓ) = f^(ℓ)( g_1^(ℓ)(_u^(ℓ-1) + ∑_v ∈ N_u g_5^(ℓ)(_v^(ℓ-1)_v,u^(ℓ-1)) )   g_2^(ℓ)∑_v ∈ V_v^(ℓ-1) g_3^(ℓ)(_u^(ℓ-1) + ∑_B ∈π_u^G_B^(ℓ))   g_4^(ℓ)(_u^(ℓ-1) + ∑_T ∈τ_u^G_T^(ℓ))   _δ_u^(ℓ)), where _i,j=_i,j if (i,j) ∈ E and is the ones vector (1^d) otherwise §.§ Training setups Clustering coefficient of QM9 graphs. To train all baselines, we use the Adam optimizer with a learning rate from {10^-3; 10^-4}, and train all models for 100 epochs using a batch size of 256 and L2 loss. Graph classification on MolHIV. We instantiate with an embedding dimension of 64 and a positional encoding dimensionality of 16. We further tune the number of layers within the set {2, 3} and use a dropout probability from the set {0, 0.25, 0.5, 0.66}. Furthermore, we train our models with the Adam optimizer <cit.>, with a constant learning rate of 10^-3. Finally, we perform training with a batch size of 256 and train for 300 epochs. Graph regression on QM9. As standard, we train using mean squared error (MSE) and report mean absolute error (MAE) on the test set. 
For training , we tune the learning rate from the set {10^-3, 5×10^-4} with the Adam optimizer, and adopt a learning rate decay of 0.7 every 25 epochs. Furthermore, we use a batch size of 256, 128-dimensional node embeddings, and 32-dimensional positional encoding. Graph regression on ZINC. In all experiments, we use a node embedding dimensionality from the set {64, 128}, use 3 message passing layers, and 16-dimensional positional encoding vectors. We train both and with the Adam optimizer <cit.> using a learning rate from the set {10^-3, 5×10^-4, 10^-4}, and follow a decay strategy where the learning rate decays by a factor of 2 for every 25 epochs where validation loss does not improve. We train using a batch size of 256 in all experiments, and run training using L1 loss for 500 epochs on the ZINC subset, and for 200 epochs on the full ZINC dataset.
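To make the ZINC setup above concrete, the sketch below shows the optimizer and learning-rate schedule just described (Adam, learning rate halved after 25 epochs without validation improvement, L1 loss, batch size 256, 128-dimensional embeddings, 3 layers, 16-dimensional positional encodings). The model constructor and the data loaders are placeholders standing in for the implementations in the linked repository, so their names and arguments are illustrative rather than the actual API.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Placeholders: the real model class and loaders live in the linked repository.
model = PlanEModel(hidden_dim=128, num_layers=3, pe_dim=16)
optimizer = Adam(model.parameters(), lr=1e-3)
# Halve the learning rate after 25 epochs without validation improvement.
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=25)
loss_fn = torch.nn.L1Loss()  # L1 regression loss, reported as MAE

for epoch in range(500):
    model.train()
    for batch in train_loader:            # batch size 256
        optimizer.zero_grad()
        loss = loss_fn(model(batch), batch.y)
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(b), b.y).item() for b in val_loader)
    scheduler.step(val_loss)              # learning-rate decay on plateau
```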
http://arxiv.org/abs/2307.01077v1
20230703145511
Supervised Manifold Learning via Random Forest Geometry-Preserving Proximities
[ "Jake S. Rhodes" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Supervised Manifold Learning via Random Forest Geometry-Preserving Proximities* Jake S. Rhodes Department of Statistics Brigham Young University Provo, Utah, USA [email protected] August 1, 2023 =============================================================================================================== Manifold learning approaches seek the intrinsic, low-dimensional data structure within a high-dimensional space. Mainstream manifold learning algorithms, such as Isomap, UMAP, t-SNE, Diffusion Map, and Laplacian Eigenmaps do not use data labels and are thus considered unsupervised. Existing supervised extensions of these methods are limited to classification problems and fall short of uncovering meaningful embeddings due to their construction using order non-preserving, class-conditional distances. In this paper, we show the weaknesses of class-conditional manifold learning quantitatively and visually and propose an alternate choice of kernel for supervised dimensionality reduction using a data-geometry-preserving variant of random forest proximities as an initialization for manifold learning methods. We show that local structure preservation using these proximities is near universal across manifold learning approaches and global structure is properly maintained using diffusion-based algorithms. supervised learning, manifold learning, random forest, data visualization, data geometry § INTRODUCTION Manifold learning algorithms are often used for exploratory data analysis. They are typically applied to noisy data in an attempt to find meaningful patterns or relationships across time, classes, or variables <cit.>. Most manifold learning approaches are unsupervised in that they do not use auxiliary information (e.g., data labels) in the embedding construction process. In many contexts, only unsupervised models are applicable as auxiliary information can be expensive or inaccessible. However, when available, label information can provide valuable insights into the data's intrinsic structure relative to the labels. Subjecting the embedding process to the use of auxiliary information can help to uncover a data geometry unattainable without labels. In this paper, we discuss weaknesses of current supervised manifold-learning approaches and show improvements on existing methods by applying a new variant of a random forest-based <cit.> similarity measure in a manifold-learning setting. We use Geometry- and Accuracy-Preserving proximities (RF-GAP <cit.>) and demonstrate their ability to meaningfully encode a similarity measure and subsequent embedding that naturally incorporates labels. As opposed to distance or similarity measures which condition upon class labels to artificially exaggerate the separation of points of opposing classes, random forest proximities serve as a measure of similarity that uses labels (continuous, categorical, or otherwise) in a manner consistent with the model's learning. Additionally, forest-based proximities appropriately denoise the data, providing a meaningful metric or graph for the embedding process. § SUPERVISED MANIFOLD LEARNING Manifold-learning algorithms use distance or similarity graphs to encode local data structure. For example, Isomap forms a k-nearest neighbor (k-NN) graph using Euclidean distance and seeks the shortest path between observations to approximate true geodesic distances upon which multi-dimensional scaling is applied <cit.>. 
Diffusion Map (DM) uses a Gaussian kernel applied to a k-NN graph to form local similarities upon which eigendecomposition is applied <cit.>. T-distributed Stochastic Neighbor Embedding, or t-SNE, estimates probabilities as a normalized Gaussian kernel to define similarities between points. The Kullback-Leibler (KL) divergence between these and a lower-dimensional mapping is estimated via gradient descent to form the target embedding <cit.>. Each of these methods is unsupervised; data labels are not used in any part of the embedding process. However, supervised variants of these methods have been developed. Most of these supervised extensions of the algorithms adapt the existing algorithm at the distance- or similarity-learning level. In some cases, distances are rescaled <cit.>, additively incremented <cit.>, or otherwise adapted conditionally upon class association <cit.>. Often, these dissimilarity measures can provide perfect linear separation where such discrimination is not possible using traditional classifiers. See Equation <ref>, for an example of a class-conditional dissimilarity, where D(.,.) denotes a distance function (e.g., Euclidean), β is usually set as the average distance between points, α lessens separation between similar points of opposing classes, and y_i, y_j are the respective labels of x_i and x_j. This dissimilarity has been used to create supervised variants of t-SNE <cit.>, Isomap <cit.>, Locally-Linear Embedding <cit.>, and Laplacian Eigenmaps <cit.>. In each of these extensions, the within-class structure is partially maintained, but manifold structures are distorted at a global level as a result of exaggerated class separation. Such dissimilarity measures are order non-preserving bijections <cit.>. D'(𝐱_i, 𝐱_j) = {[ √(1-e^-D^2(𝐱_i, 𝐱_j)/β) y_i=y_j; √(e^D^2(𝐱_i, 𝐱_j)/β)-α y_i≠ y_j ]. These approaches are problematic in several ways: (1) The class-conditional distances form an attempt to maintain with-class structure but cause disruption between classes. Artificial class separation diminishes inter-class relationships, thus distorting the global data structure. (2) The manifold disruption reduces the integrity of resulting downstream tasks. For example, classification tasks following dimensionality reduction can have unrealistically low error rates. (3) Class-conditional measures do not provide an avenue for continuously-valued labels. These extensions have not been adapted to regression problems. (4) These approaches are not extendable to new, unlabeled points (e.g., a test set used for subsequent predictions). To overcome each of these weaknesses, we propose the use of random forests <cit.> to generate supervised similarities to be used in manifold learning. § RANDOM FOREST PROXIMITIES Random forests <cit.> provide a number of benefits for prediction problems that are supportive of metric learning. For example, random forests apply to both regression and classification problems, handle mixed variable types, provide an unbiased estimate of the generalization error, are insensitive to monotonic transformations, are relatively robust to outliers, and provide a natural way of assessing variable importance, ignoring noise variables in the presence of meaningful ones. Random forests form an ensemble in binary-recursive decision trees each of which partitions a bootstrap sample of the training data. Observations within the bootstrap sample are called in-bag, while those in the training data not included in the sample are called out-of-bag, or OOB. 
Each partition forms a decision space used for classification or regression. The partitions naturally form a channel for generating similarity measures using data labels. These similarities are referred to in the literature as random forest proximities. Leo Breiman originally defined the proximity between two observations, x_i, and x_j, denoted by p(x_i, x_j), as the number of terminal nodes they share across all trees, divided by the number of trees in the forest <cit.>. This simple approach applies to all training points regardless of bootstrap status. As a result, proximities constructed on the training set tend to slightly overemphasize class segregation. To overcome this, an alternative formulation was derived to only calculates pair-wise similarities between points x_i and x_j using trees in which both of these observations are OOB <cit.>. Subsequently, these proximities combat overinflated class separation. However, it has been shown that OOB-only proximities do not fully benefit from the random forest's learning and are a noisier similarity measure <cit.>. In <cit.>, the authors demonstrated that random forests behave like a nearest-neighbor regressor with an adaptive bandwidth. That is, random forest predictions (in the regression context) can be determined as a weighted sum using a kernel function, as shown in Equation <ref>. ŷ_i(k) = ∑_j i k(x_i, x_j)y_j Here, k is a weighted kernel function determined by the number of training examples sharing a terminal node with x_i. This is comparable to other kernel methods, such as the SVM, which uses a kernel to define similarity and ultimately the decision space. Ideally, random forest proximities should serve as a kernel capable of mimicking random forest predictions. Using normalized proximities as weights can serve as a test for the proximities' consistency with the forest's learning. In this regard, existing random forest proximity formulations do not adequately incorporate the forest's learning <cit.>. Both the original formulation <cit.>, as well as that using only OOB observations <cit.>, are not capable of reconstructing the random forest predictions for two reasons: (1) Random forests train on a set of bootstrap (in-bag) samples and predict on another set (the OOB samples or a test set). The original formulation doesn't discriminate between in-bag or OOB observations, and the OOB proximity definition does not use in-bag samples in their construction. (2) Decision tree voting takes into account the number of in-bag observations within a given terminal node, while the proximities are constructed without regard to the number of “voting points" <cit.>. To construct proximities that serve as a kernel for random forest prediction, these two points must be accounted for. In <cit.>, the authors propose a new proximity formulation capable of reconstructing random forest OOB and test predictions as a kernel method. They call these proximities Random Forest-Geometry- and Accuracy-Preserving proximities (RF-GAP) and show improvement across multiple applications using this new definition, including data imputation, outlier detection, and visualization via MDS. The RF-GAP proximities, however, do not form a proper kernel function as originally defined. They are asymmetric and self-similarity is defined to be 0 to account for the kernel prediction problem. To overcome this, we normalize the similarities to set the maximum similarity to 1, symmetrize the proximities, and define the diagonal entries to be 1. 
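A minimal sketch of this post-processing is given below, applied for brevity to Breiman's original leaf-co-occurrence proximities rather than the full RF-GAP weighting (which additionally tracks per-tree in-bag counts and out-of-bag membership). For the Breiman variant the matrix is already symmetric with unit diagonal, so the final three lines are essentially a no-op; they are shown because they are exactly the steps applied to the asymmetric RF-GAP proximities. The training arrays X and y are assumed to be given.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def breiman_proximities(forest, X):
    """Fraction of trees in which two observations share a terminal node."""
    leaves = forest.apply(X)                # shape (n_samples, n_trees)
    n, n_trees = leaves.shape
    prox = np.zeros((n, n))
    for t in range(n_trees):
        prox += leaves[:, t][:, None] == leaves[:, t][None, :]
    return prox / n_trees

# X, y: training features and labels (assumed given).
rf = RandomForestClassifier(n_estimators=500, oob_score=True).fit(X, y)
P = breiman_proximities(rf, X)

# Post-processing described above: symmetrize, rescale so the maximum
# similarity is 1, and set self-similarity to 1.
K = (P + P.T) / 2.0
K /= K.max()
np.fill_diagonal(K, 1.0)
```

The resulting matrix K can then be passed to any embedding method that accepts a precomputed kernel or affinity matrix.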
In doing so, the proximities can serve in any capacity which requires a kernel matrix. Using this modified RF-GAP formulation, these proximities serve as a similarity measure that overcomes the weaknesses of the class-conditional supervised distances in the following ways: (1) Rather than conditionally adapting an existing distance measure and thereby distorting the global data structure, random forest proximities provide a measure of local similarity which partially retains global information through the forest's recursive splitting process. Therefore, instead of exaggerating class separation, natural observational relationships are retained, as can be seen in low-dimensional visual representations (see Figure <ref>). (2) Proximities formed using OOB data points retain the random forest's learning, thus, downstream task integrity is not jeopardized but relevant supervised information is retained. (3) Random forests are not limited to classification problems but also work with continuous labels. This provides an avenue for supervised metric learning in a regression context, as shown in Figure <ref>. (4) A trained random forest model can extend similarity measures to unlabeled or out-of-sample observations, providing a means for semi-supervised metric learning or subsequent prediction. Additionally, noise variables are not likely to be used for splitting unless relevant variables are not included in the random subset of splitting variables. Subsequently, generated proximities naturally account for variable importance and can serve as a means of denoising. We show that supervised manifold learning methods generally improve with the use of RF-GAP proximities. We compare the embedding mapping using common manifold learning algorithms including Isomap, t-SNE, Diffusion Map, Laplacian Eigenmaps, UMAP, PHATE, MDS, and Kernelized-PCA. Visualizations for each of these can be found in Appendix <ref>. Additionally, we demonstrate that diffusion-based embeddings generated using RF-GAP proximities better retain the random forest's learning in low dimensions, as shown in Figure <ref>. § RANDOM FOREST-BASED MANIFOLD LEARNING Local connectivity is encoded via a distance metric. A kernel function, (e.g., Gaussian kernel) can be applied to the graph distances to provide a local measure of similarity between observations from which global relationships can be learned. For example, in diffusion processes, a stochastic matrix, or diffusion operator, is formed by row-normalizing the pair-wise similarities. The global structure is learned by powering the diffusion operator, simulating a random walk between observations. The quality of learned embedding is highly dependent on the kernel construction as well as global-structure mapping. Unlike an unsupervised kernel function, random forest proximities form noise-resilient, locally-adaptive neighborhoods ensuring that subsequent embeddings are constructed in a manner relevant to and consistent with the data labels. Similarities between points of different classes are still reflected in the proximity values, whereas this inter-class preservation is lost or diminished in class conditional measures such as the one given by Equation <ref>. Continuous labels can also be reflected in the embedding using random forest proximities. We provide an example in Figure <ref>. In this figure, embeddings are colored both by the life expectancy (the target label) as well as the country's economic status (developed vs. developing). 
It is clear that the unsupervised embeddings create separate clusters for each economic status, while the RF-PHATE embedding shows a continuum consistent with life expectancy in lower dimensions. § QUANTIFYING RESULTS We use two methods to evaluate the low-dimensional embeddings in the supervised context. The first approach determines the extent to which the embedding can be used for the original classification problem. To this end, we use the 2-dimensional embeddings of unsupervised, supervised, and RF-GAP-based models to train a k-NN classifier to predict the original labels and compare the accuracy with that of a model trained on the full dataset. All accuracies were averaged using leave-one-out cross-validation and the overall results were aggregated across all datasets given in Table <ref>. Ideally, the difference in accuracies should be minimal, that is, we want to retain useful information without overfitting. In Figure <ref>, we see that the unsupervised embeddings produce accuracies lower than those using the full datasets, demonstrating a loss of information in low dimensions. The class-conditioned, supervised approaches tend to overfit the labels, generally producing much higher accuracies (near-perfect, in some examples) as a result of artificial class separation. The RF-GAP-based embeddings have accuracies more consistent with models trained on the full dataset, though results vary by method. The slightly higher values for the majority of RF-GAP methods can be accounted for by the superior predictive power of random forests over k-NN. We show in Figure <ref> that the k-NN predictive accuracy of the RF-GAP embeddings typically aligns very well with the OOB score of the forest which generated the proximities, demonstrating the embeddings' abilities to retain the forest's learning in low dimensions. The second evaluation technique provides an assessment of the hierarchy of variable importance retained in the embedding. In a supervised context, variables that provide higher class-discriminatory power are considered to be more important. For this evaluation, we first determine a permutation-based variable importance score using a k-NN classification model on the original supervised task. We then produce a second set of variable importance scores by regressing onto the embedding using a k-NN model trained on the original dataset. We calculate the correlation between the two variable importance scores. In Figure <ref>, we see that supervised models generally outperform unsupervised models in retaining variable importance, bearing in mind that class-conditional methods inflate class discrimination. Diffusion-based RF-GAP methods tend to perform best with this metric, suggesting they better preserve global, hierarchical variable importance. § CONCLUSION In this paper, we discussed the weaknesses of existing supervised manifold learning methods. We showed that RF-GAP-based manifold learning methods preserve local structure while maintaining global structure relative to data classes as can be seen in scatterplots of the embeddings. The visual quality of the RF-GAP embedding depends on the method used, but variable importance is maintained in low dimensions regardless of the method. Diffusion-based RF-GAP embeddings tend to retain the random forest's learning in low dimensions, suggesting that such methods better maintain the integrity of the kernel from which the embeddings were derived. 
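For reference, the first evaluation protocol of the Quantifying Results section can be sketched as follows: a k-NN classifier is scored with leave-one-out cross-validation on the 2-dimensional embedding and on the full feature set, and the two accuracies are compared. The arrays X, y, and the precomputed embedding emb are assumed to be given; the choice k = 5 is illustrative.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_knn_accuracy(features, labels, k=5):
    """Leave-one-out accuracy of a k-NN classifier on the given features."""
    knn = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(knn, features, labels, cv=LeaveOneOut()).mean()

# X, y: original data and labels; emb: 2-D embedding (all assumed given).
acc_full = loo_knn_accuracy(X, y)     # reference accuracy on the full data
acc_emb  = loo_knn_accuracy(emb, y)   # accuracy using only the embedding
print(f"accuracy difference (embedding - full data): {acc_emb - acc_full:+.3f}")
```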
§ ADDITIONAL EXPERIMENTAL RESULTS The RF-GAP proximities perfectly preserve the random-forest predictive ability, as was shown in <cit.>. In Figure <ref>, we show that this predictive ability is also well-preserved in low dimensions: the diffusion-based methods, as well as RFTSNE and RFUMAP, nearly perfectly reproduce the OOB score when a k-NN classifier is trained on the embeddings. The following figures provide examples comparing unsupervised, supervised, and RF-GAP-based methods. In most examples, the supervised models create near-linear separation between classes, while the RF-GAP methods preserve relationships meaningful to the supervised task without disrupting the global structure. § DATASETS USED
http://arxiv.org/abs/2307.03040v1
20230706150542
Distributed Interior Point Methods for Optimization in Energy Networks
[ "Alexander Engelmann", "Michael Kaupmann", "Timm Faulwasser" ]
math.OC
[ "math.OC", "cs.SY", "eess.SY" ]
First]Alexander Engelmann First]Michael Kaupmann First]and Timm Faulwasser [First]Institute for Energy Systems, Energy Efficiency and Energy Economics, TU Dortmund University, Dortmund, Germany (e-mail: {alexander.engelmann, timm.faulwasser}@ieee.org). This note discusses an essentially decentralized interior point method, which is well suited for optimization problems arising in energy networks. Advantages of the proposed method are guaranteed and fast local convergence also for problems with non-convex constraints. Moreover, our method exhibits a small communication footprint and it achieves a comparably high solution accuracy with a limited number of iterations, whereby the local subproblems are of low computational complexity. We illustrate the performance of the proposed method on a problem from energy systems, i.e., we consider an optimal power flow problem with 708 buses. Distributed Optimization, Decentralized Optimization, Interior Point Method § INTRODUCTION Distributed and decentralized optimization algorithms are key for the optimal operation of networked systems.[ We refer to an optimization algorithm as being distributed if one has to solve a (preferably cheap) coordination problem in a central entity/coordinator. We denote an optimization algorithm as being decentralized in absence of such a coordinator and when the agents rely purely on neighbor-to-neighbor communication <cit.>. We call an algorithm essentially decentralized if it has no central coordination but requires a small amount of central communication. Note that the definition of distributed and decentralized control differs <cit.>. ] Application examples range from power systems <cit.>, via optimal operation of gas networks networks <cit.>, to distributed control of data networks <cit.>. Classical distributed optimization algorithms used in the above works are, however, typically guaranteed to converge only for problems with convex constraints. Moreover, sufficiently accurate models are often non-linear leading to problems with non-convex constraints. Thus, researchers either apply classical methods without convergence guarantees in a heuristic fashion <cit.>, or they rely on simplified convex models <cit.>. From an operations point of view, both approaches come with the risk of an unstable system operation. Moreover, classic approaches achieve linear convergence rates in the best case <cit.>. <cit.>, <cit.>, and <cit.> propose distributed second-order methods with fast—i.e. superlinear—convergence guarantees for non-convex problems. These approaches rely on the exchange of quadratic models of the subproblems, which in turn implies a substantial need for communication and/or central coordination. In <cit.> we have shown how to overcome quadratic model exchange by a combination of active set methods and techniques from inexact Newton methods. However, in practice the detection of the correct active set is difficult and can be numerically unstable. In two recent papers, we have shown how to decompose interior point methods in an essentially decentralized fashion, i.e., decomposition is achieved without relying on any central computation <cit.>. We do so by combining interior point methods with decentralized inner algorithms for solving the Newton step. Interior point methods have the advantage that an active set detection is avoided while fast—i.e. superlinear—local convergence can be guaranteed for non-convex problems. 
This note considers the application of the essentially decentralized interior point method (d-IP) to the optimal power flow problem which arises frequently in power systems. § PROBLEM FORMULATION A common formulation of optimization problems in the context of networked systems is min_x_i,…,x_|𝒮| ∑_i ∈𝒮 f_i(x_i) subject to g_i(x_i) =0, ∀ i ∈𝒮, h_i(x_i) ≤ 0, ∀ i ∈𝒮, ∑_i ∈𝒮 A_ix_i = b, where, 𝒮={1,…, |𝒮|} denotes a set of subsystems, each of which is equipped with an objective function f_i:ℝ^n_i→ℝ and equality and inequality constraints g_i, h_i:ℝ^n_i→ℝ^n_gi,ℝ^n_hi. The matrices A_i ∈ℝ^n_c × n_i and the vector b∈ℝ^n_c are coupling constraints between the subsystems. § A DISTRIBUTED INTERIOR POINT METHOD Interior point methods reformulate problem (<ref>) via a logarithmic barrier function and slack variables v_i∈ℝ^n_hi, min_x_1,…,x_|𝒮|,v_1,…,v_|𝒮| ∑_i ∈𝒮 f_i(x_i) - 1^⊤δln (v_i) subject to g_i(x_i) =0, ∀ i ∈𝒮, h_i(x_i) +v_i = 0, v_i ≥ 0, ∀ i ∈𝒮, ∑_i ∈𝒮 A_ix_i = b . The variable δ∈ℝ_+ is a barrier parameter, 1= (1,…,1)^⊤∈ℝ^n_hi and the function ln(·) is evaluated component-wise. Note that the inequality constraints are replaced by barrier functions. Moreover, (<ref>) and (<ref>) share the same minimizers for δ→ 0. The main idea of interior point methods is to solve (<ref>) for a decreasing sequence of δ. It is often too expensive to solve (<ref>) to full accuracy—hence one typically performs a hand full Newton steps only <cit.>. In this note we use a variant which computes only one Newton step per iteration. Next, we give a brief summary of distributed interior point methods; details are given in <cit.>. §.§ Decomposing the Newton Step An exact Newton step ∇ F^δ(p)Δ p = - F^δ (p) applied to the first-order optimality conditions F^δ (p)=0 of (<ref>) reads [ ∇ F_1^δ 0 … Ã_1^⊤; 0 ∇ F_2^δ … Ã_2^⊤; ⋮ ⋮ ⋱ ⋮; Ã_1 Ã_2 … 0 ][ Δ p_1; Δ p_2; ⋮; Δλ ] = [ -F_1^δ; -F_2^δ; ⋮; b -∑_i ∈𝒮 A_i x_i ], where ∇ F_i^δ = [ ∇_xx L_i 0 ∇ g_i(x_i)^⊤ ∇ h_i(x_i)^⊤; 0 -V_i^-1 M_i 0 I; ∇ g_i(x_i) 0 0 0; ∇ h_i(x_i) I 0 0; ] , M_i = diag(μ_i), and Ã_i = [ A_i 0 0 0 ], cf. <cit.>. Here, p=(p_1,…,p_|𝒮|,λ) and p_i = ( x_i, v_i, γ_i, μ_i ), where γ_i, μ_i, and λ are Lagrange multipliers assigned to (<ref>), (<ref>), and (<ref>) respectively. Note that the optimality conditions F^δ (p)=0 are parameterized by the barrier parameter δ. The coefficient matrix in (<ref>) has an arrowhead structure which we exploit for decomposition. Note that each ∇ F_i^δ can be computed based on local information only. Assume that ∇ F_i^δ is invertible. Then, one can reduce the KKT system (<ref>) by solving the first S block-rows for Δ p_i. Hence, Δ p_i = - (∇ F_i^δ )^-1 (F_i^δ + Ã_i^⊤Δλ ) for all i∈𝒮. Inserting (<ref>) into the last row of (<ref>) yields (∑_i ∈𝒮Ã_i (∇ F_i^δ )^-1Ã_i^⊤ ) Δλ = (∑_i ∈𝒮 A_i x_i - Ã_i (∇ F_i^δ )^-1F_i^δ) - b. Define S_i ≐ Ã_i (∇ F_i^δ )^-1Ã_i^⊤, and s_i ≐ A_i x_i - Ã_i (∇ F_i^δ )^-1F_i^δ -1|𝒮| b. Then, equation (<ref>) is equivalent to (∑_i ∈𝒮 S_i ) Δλ - ∑_i ∈𝒮 s_i = S Δλ -s =0. Observe that once (<ref>) is solved, one can compute Δ p_1,…,Δ p_|𝒮| locally in each subsystem based on Δλ via back-substitution into (<ref>). This way, we are able to solve (<ref>) in a hierarchically distributed fashion, i.e., we first compute (S_i,s_i) locally for each subsystem and then collect (S_i,s_i) in a coordinator. One continues by solving (<ref>) and distributing Δλ back to all subsystems i ∈𝒮, which in turn use (<ref>) to recover Δ p_i. 
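The elimination above can be summarized in a few lines of linear algebra. The sketch below is a purely illustrative, single-machine version: each subsystem is represented by its local KKT matrix and residual (gradF_i, F_i), its coupling matrices (A_tilde_i, A_i), and its current iterate x_i, all assumed to be given; in the actual distributed method the sums over subsystems are, of course, not formed by a single process.

```python
import numpy as np

def schur_pieces(gradF_i, F_i, A_tilde_i, A_i, x_i, b, n_sub):
    """Local Schur-complement contributions S_i and s_i of subsystem i."""
    Z_i = np.linalg.solve(gradF_i, A_tilde_i.T)   # (grad F_i)^{-1} A_tilde_i^T
    w_i = np.linalg.solve(gradF_i, F_i)           # (grad F_i)^{-1} F_i
    S_i = A_tilde_i @ Z_i
    s_i = A_i @ x_i - A_tilde_i @ w_i - b / n_sub
    return S_i, s_i

def decomposed_newton_step(subsystems, b):
    """subsystems: list of tuples (gradF_i, F_i, A_tilde_i, A_i, x_i)."""
    n_sub = len(subsystems)
    pieces = [schur_pieces(*sub, b, n_sub) for sub in subsystems]
    S = sum(S_i for S_i, _ in pieces)
    s = sum(s_i for _, s_i in pieces)
    dlam = np.linalg.solve(S, s)                  # coordination step
    # Local back-substitution: Delta p_i = -(grad F_i)^{-1} (F_i + A_tilde_i^T dlam)
    dp = [np.linalg.solve(gradF_i, -(F_i + A_tilde_i.T @ dlam))
          for gradF_i, F_i, A_tilde_i, _, _ in subsystems]
    return dlam, dp
```

In d-IP, the central solve for Δλ is replaced by the decentralized inner iterations discussed next.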
§.§ Decentralization Solving (<ref>) in a central coordinator is typically not preferred due to the large amount of information exchange for large-scale systems and due to safety reasons. Hence, we solve (<ref>) in a decentralized fashion without central computation via decentralized inner algorithms. One can show that S is symmetric and positive-semidefinite. Hence, one can apply a decentralized version of the conjugate gradient method (d-CG) <cit.>. Alternatively, one can also use decentralized optimization algorithm by reformulating (<ref>) as a convex optimization problem. Typically it is expensive in terms of communication and computation to solve (<ref>) to full accuracy by inner algorithms. Thus, we use techniques from inexact Newton methods to terminate the inner algorithms early based on the violation of the optimality conditions F^δ (p)=0, cf. <cit.> . Doing so, one can save a large amount of inner iterations—especially in early outer iterations. When F^δ(p^k) gets closer to zero, we also force the residual of (<ref>) to become smaller to guarantee convergence to a minimizer. §.§.§ Updating Stepsize and the Barrier Parameter The barrier parameter δ and the stepsize α in the for the Newton step p^k+1= p^k + αΔ p^k requires a small amount of central communication but no central computation. Indeed, it is possible to compute local surrogates {α_i}_i∈𝒮 and {δ_i}_i∈𝒮 and take their minimal/maximal values over all subsystems to obtain (α,δ). §.§.§ The Overall Algorithm The overall distributed interior point algorithm is summarized in Algorithm <ref>. Algorithm <ref> has local superlinear convergence guarantees for non-convex problems in case the barrier parameter and the residual in (<ref>) decrease fast enough, cf. <cit.>. § APPLICATION TO OPTIMAL POWER FLOW Optimal Power Flow (OPF) problems aim at finding optimal generator set-points in power systems while meeting grid constraints and technical limits <cit.>. The basic AC OPF problem reads min_s,v ∈ℂ^N f(s) subject to s- s^d = diag (v) Y v^*, p≤re(s) ≤p̅, q≤im(s) ≤q̅, v≤abs(v) ≤v̅, v^1 = v^s. Here, v ∈ℂ^N are complex voltages, and s ∈ℂ^N are complex power injections at all buses N. The operators re(·) and im(·) denote the real part and imaginary part of a complex number, and (·)^* denotes the complex conjugate. The objective function f:ℂ^N→ℝ encodes the cost of power generation. The grid physics are described via the power flow equations (<ref>), where Y ∈ℂ^N× N is the complex bus-admittance matrix describing grid topology and parameters. Moreover, s^d ∈ℂ^N is a fixed power demand. The constraints (<ref>) describe technical limits on the power injection by generators, and (<ref>) models voltage limits. The second equation in (<ref>) is a reference condition on the voltage at the first bus, v^1, where the complex voltage is constrained to a reference value v^s. Note that one can reformulate the OPF problem (<ref>) in form of (<ref>) by introducing auxiliary variables. Different variants of doing do exist; here we rely on a reformulation from <cit.>. §.§ A case study As a case study, we consider 6 interconnected IEEE 118-bus test systems shown in Fig. <ref>. Each of these systems corresponds to one subsystem i∈𝒮 in problem (<ref>). We use grid parameters from , and we interconnect the subsystems in an asymmetric fashion to generate non-zero flows at the interconnection points. In total, we get an optimization problem with about 3.500 decision variables. Fig. 
<ref> depicts the convergence of Algorithm <ref> over the iteration index k with algorithm parameters from <cit.>. The figure depicts the consensus violation Ax^k-b_∞, which can be interpreted as the maximum mismatch of physical values at boundaries between subsystems. Furthermore, the relative error in the objective function |f^k-f^⋆|/f^⋆, the infeasibilities g(x^k)_∞ and max(0,h(x^k))_∞, the distance to the minimizer x^k-x^⋆_∞, the number of inner iterations of d-CG, the barrier parameter sequence {δ^k}, and the primal and dual step size (α^p,α^d) are shown. The centralized solution x^⋆ is computed via the open-source solver <cit.>. One can observe that the consensus violation is at the level of 10^-5 for all iterations. This means that the iterates are feasible with respect to the power transmitted over transmission lines. This results from the fact that the consensus constraint (<ref>) is implicitly enforced when solving (<ref>) via d-CG. A low consensus violation has the advantage, that one can terminate d-IP early and apply one local NLP iteration to obtain a feasible but possibly suboptimal solution.[Assuming that the local OPF problem is feasible for the current boundary value iterate.] We note that feasibility is typically of much higher importance than optimality in power systems, since feasibility ensures a safe system operation, cf. Remark <ref>. From g(x^k)_∞ and max(0,h(x^k))_∞[The blank spots in the plot for max(0,h(x^k))_∞ correspond to zero values, since log(0)=-∞.] in Fig. <ref> one can see that feasibility is ensured to a high degree after 20-30 dIP iterations. At the same time we reach a suboptimality level of almost 0.01 %, which is much smaller than in other works on distributed optimization for OPF, cf. <cit.>. Moreover, one can see that the distance to the minimizer x^k-x^⋆_∞ is still quite large due to the small sensitivity of f with respect to the reactive power inputs. This is also common in OPF problems. Regarding Algorithm <ref> itself, one can see that the barrier parameter δ steadily decreases in each iteration. Moreover, during the first 20 iterations, comparably small step-sizes are used. The domain of local convergence is reached after around 30 iterations. Note that we use different stepsizes α_p for the primal variables and α_d for the dual variables. Observe that due to the dynamic termination of inner d-CG iterations based on the inexact Newton theory, Algorithm <ref> requires a small amount of inner iterations in the beginning and the number of iterations have to increase when coming closer to a local minimizer. This saves a substantial amount of inner iterations compared to a fixed inner termination criterion. The widely used Alternating Direction Method of Multipliers (ADMM) does not converge for the considered case. This seems to occur rarely, but was also reported in other works <cit.>. Algorithm <ref> requires 25 seconds for performing 35 iterations with serial execution. The solver needs about 13 seconds when applied to the distributed formulation and 2 seconds when applied to the centralized problem formulation. Executing 497 ADMM iterations—this reflects the number of d-CG iterations in Algorithm <ref>— requires 210 seconds with serial execution. This illustrates the large computation overhead of ADMM in the local steps, since here one has to solve an NLP in each iteration and for each subsystem. In contrast, d-IP only needs to perform one matrix inversion every outer iteration. 
All simulations are performed on a standard state-of-the-art notebook. Note that feasibility in the range of 10^-3 to 10^-5 is typically sufficient for a safe power system operation. The parameters in the OPF problem (<ref>), such as power demands and line parameters, introduce uncertainty into the problem, which is typically much larger than this level <cit.>. Hence, in applications there is typically little-to-no benefit in solving OPF problems to machine precision. § SUMMARY & OUTLOOK We have presented an essentially decentralized interior point method for distributed optimization in energy networks with advantageous properties in terms of convergence guarantees, communication footprint, and practical convergence. We have illustrated the performance of our method on a 708-bus case study. Future work will consider improvements in the implementation of d-IP, aiming at faster execution times and scalability up to several thousand buses.
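As a complementary illustration of the inner step, the following toy conjugate-gradient routine solves the coordination system using only the per-subsystem matrix–vector products S_i p, which are the quantities that would be exchanged between neighbors. It is written as a plain centralized loop and is not the decentralized d-CG implementation of the cited works; the termination tolerance mirrors the inexact-Newton idea of stopping the inner iterations once the residual is small enough.

```python
import numpy as np

def cg_on_subsystem_sum(S_list, s, tol=1e-8, max_iter=500):
    """Conjugate gradients for (sum_i S_i) x = s using only products S_i @ p."""
    def matvec(p):
        return sum(S_i @ p for S_i in S_list)   # per-subsystem products

    x = np.zeros_like(s, dtype=float)
    r = s - matvec(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Sp = matvec(p)
        alpha = rs_old / (p @ Sp)
        x += alpha * p
        r -= alpha * Sp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:               # early (inexact) termination
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```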
http://arxiv.org/abs/2307.01837v1
20230704172950
Critical dynamical behavior of the Ising model
[ "Zihua Liu", "Erol Vatansever", "Gerard T. Barkema", "Nikolaos G. Fytas" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
[email protected] Department of Information and Computing Sciences, Utrecht University, Princetonplein 5, 3584 CC Utrecht, the Netherlands [email protected] Department of Physics, Dokuz Eylül University, TR-35160, Izmir, Turkey Centre for Fluid and Complex Systems, Coventry University, Coventry, CV1 5FB, United Kingdom [email protected] Department of Information and Computing Sciences, Utrecht University, Princetonplein 5, 3584 CC Utrecht, the Netherlands [email protected] Centre for Fluid and Complex Systems, Coventry University, Coventry, CV1 5FB, United Kingdom We investigate the dynamical critical behavior of the two- and three-dimensional Ising model with Glauber dynamics. In contrast to the usual standing, we focus on the mean-squared deviation of the magnetization M, MSD_M, as a function of time, as well as on the autocorrelation function of M. These two functions are distinct but closely related. We find that MSD_M features a first crossover at time τ_1 ∼ L^z_1, from ordinary diffusion with MSD_M ∼ t, to anomalous diffusion with MSD_M ∼ t^α. Purely on numerical grounds, we obtain the values z_1=0.45(5) and α=0.752(5) for the two-dimensional Ising ferromagnet. Related to this, the magnetization autocorrelation function crosses over from an exponential decay to a stretched-exponential decay. At later times, we find a second crossover at time τ_2 ∼ L^z_2. Here, MSD_M saturates to its late-time value ∼ L^2+γ/ν, while the autocorrelation function crosses over from stretched-exponential decay to simple exponential one. We also confirm numerically the value z_2=2.1665(12), earlier reported as the single dynamic exponent. Continuity of MSD_M requires that α(z_2-z_1)=γ/ν-z_1. We speculate that z_1 = 1/2 and α = 3/4, values that indeed lead to the expected z_2 = 13/6 result. A complementary analysis for the three-dimensional Ising model provides the estimates z_1 = 1.35(2), α=0.90(2), and z_2 = 2.032(3). While z_2 has attracted significant attention in the literature, we argue that for all practical purposes z_1 is more important, as it determines the number of statistically independent measurements during a long simulation. Critical dynamical behavior of the Ising model Nikolaos G. Fytas August 1, 2023 ============================================== § INTRODUCTION Universality is a key concept in statistical physics <cit.>. Phenomena which at a first glance seem completely unrelated, such as the liquid-gas phase transition and the ferromagnetic-paramagnetic phase transition in magnetic materials, belong to the same universality class, sharing the same set of critical exponents and other renormalization-group invariants that characterize their equilibrium behavior around the critical point <cit.>. The Ising model <cit.>, the simplest fruit-fly model in statistical physics which lends itself well for theory and simulation, is found to belong to the same universality class <cit.>. Studies of the critical equilibrium properties of the Ising model are therefore of direct experimental relevance <cit.>. The concepts of critical phenomena can fortunately be extended to dynamical processes – for a seminal review see Ref. <cit.>. However, while universality is well established for equilibrium properties, it is not clear in how far it also extends to dynamical properties <cit.>. As it is well-known, the onset of criticality is marked by a divergence of both the correlation length ξ and the correlation time τ. 
While the former divergence yields singularities in static quantities, the latter manifests itself notably as critical slowing down. To describe dynamical scaling properties, an additional exponent is required in addition to the static exponents. This so-called dynamic exponent z links the divergences of length and time scales, i.e., τ∼ξ^z <cit.>. In a finite system, ξ is bounded by the linear system size L, so that τ∼ L^z at the incipient critical point. The dynamic critical exponent z has been numerically computed to be z = 2.1665(12) at two dimensions by Nightingale and Blöte  <cit.>. Note the value z = 2.0245(15) at three dimensions <cit.>. In the current paper we attempt to extend our knowledge in the field by highlighting an overlooked aspect of dynamic critical phenomena using single spin-flip (Glauber) dynamics on the two- and three-dimensional Ising ferromagnet. In contrast to the standard belief that the dynamical critical behavior is characterized by a single dynamic exponent z, we provide numerical evidence that there is another dynamic critical exponent, considerably smaller than the most studied one, which appears to be of greater practical relevance. In particular, we provide a more refined description of the magnetization autocorrelation function featuring three regimes that are separated by two crossover times, namely τ_1∼ L^z_1 and τ_2∼ L^z_2, where z_1 is a newly identified dynamic exponent and z_2 the already well-known exponent <cit.>. The rest of the paper is laid out as follows: In Sec. <ref> we introduce the model and outline the numerical details of our implementation. In Sec. <ref> we introduce the key observables under study and elaborate on the analysis of the numerical data, placing our findings into context. Finally, in Sec. <ref> we critically summarize the main outcomes of this work in the framework of the current literature and also set an outlook for future studies. § MODEL AND NUMERICAL DETAILS We consider the nearest-neighbor, zero-field Ising model with Hamiltonian ℋ = -J ∑_⟨ i,j ⟩σ_i σ_j, where J > 0 indicates ferromagnetic interactions, σ_i = ± 1 denotes the spin on lattice site i, and ⟨…⟩ refers to summation over nearest neighbors only. Here, we study the two- and three-dimensional Ising model on the square (L × L) and simple cubic (L × L × L) lattices respectively, employing periodic boundary conditions. Many equilibrium properties of these models are known, especially at two dimensions where exact results are available, such as the location of the critical temperature, i.e., T_ c = 2 / ln( 1+√(2)) = 2.269185… <cit.>. For the three-dimensional model on the other hand, there is a wealth of high-accuracy estimates of critical parameters from various approximation methods, see Ref. <cit.> and references therein. One such prominent example is the value of the critical point T_ c = 4.511523…, recently proposed in Ref. <cit.> via large-scale numerical simulations. The Ising model is without doubt a prototypical model for studying dynamical properties. For this purpose, an elementary move is a proposed flip of a single spin at a random location, which is then accepted or rejected according to the Metropolis algorithm <cit.>. One unit of time then consists of N = L^2 elementary moves at two dimensions (similarly, N = L^3 at three dimensions). This dynamics is often referred to as Glauber dynamics <cit.>, even though Glauber originally used a slightly different acceptance probability. 
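A minimal sketch of one time unit of this dynamics — L² proposed single-spin flips with Metropolis acceptance on a periodic L×L lattice at T_c (with J = k_B = 1) — is given below, together with the recording of the magnetization time series from which MSD_M and the autocorrelation function are later computed. This is only an illustration of the update rule, not the production code used for the simulations reported here.

```python
import numpy as np

J = 1.0
T_C = 2.0 / np.log(1.0 + np.sqrt(2.0))    # critical temperature, 2D square lattice

def metropolis_sweep(spins, T=T_C, rng=None):
    """One Monte Carlo time unit: L*L proposed single-spin flips."""
    if rng is None:
        rng = np.random.default_rng()
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum over the four nearest neighbors with periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn    # energy cost of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

rng = np.random.default_rng(0)
L = 16
spins = rng.choice([-1, 1], size=(L, L))
M = [spins.sum()]
for _ in range(1000):
    metropolis_sweep(spins, rng=rng)
    M.append(spins.sum())                  # magnetization time series for MSD_M, C_M
```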
Other commonly used dynamical algorithms in the extensive literature are the spin-exchange (Kawasaki) dynamics <cit.>, as well as numerous types of cluster algorithms <cit.>. Yet, these are outside the scope of the current work. On the technical side, our numerical simulations of the Ising model were performed at the critical temperature <cit.> using single spin-flip dynamics and systems with linear sizes within the range L = {16 - 96} at two dimensions (accordingly, L ∈{10 - 40} at three dimensions). We note that the simulation time needed for a single realization on a node of a Dual Intel Xeon E5-2690 V4 processor was 1 hour for L = 96 at two dimensions. The analogous CPU time was 35 minutes for L = 40 at three dimensions. For each system size L, 10^4 - 10^5 independent realizations have been generated at both dimensions. § RESULTS AND ANALYSIS The two key observables that allow us to elaborate on some new aspects of the dynamical behavior of the Ising ferromagnet are based on the order parameter (magnetization) of the system M = ∑_i σ_i. The first is the mean-squared deviation of the magnetization MSD_M(t)=⟨ (Δ M(t))^2 ⟩ = ⟨ (M(t)-M(0))^2 ⟩, and the second the magnetization's autocorrelation function, defined as C_M(t)=⟨ M(t) · M(0) ⟩. We start the presentation with the two-dimensional Ising model and the raw numerical data, as shown in Fig. <ref>. In particular Fig. <ref>(a) depicts the MSD_M(t), whereas Fig. <ref>(b) the normalized autocorrelation Ĉ_M(t)=⟨ M(t)M(0) ⟩/ ⟨ M^2(0) ⟩, both as a function of time. Three distinct regimes can be identified, separated by two crossover correlation times, τ_1 and τ_2. At short times t, the dynamics consist of L^2 t proposed spin flips at spatially separated locations, of which a fraction f≈ 0.14 is accepted. The dynamics thus involve f L^2 t uncorrelated changes of Δ M=± 2. Consequently, MSD_M in the short-time regime is given by MSD_M=4f L^2 t (t ≪τ_1). At these short times, the magnetization does not have enough time to change significantly. Hence, it stays close to its value at t = 0. The expectation of the squared magnetization is related to the magnetic susceptibility <cit.> χ=β/L^2⟨ M^2 ⟩. Thus, in the short-time regime, C_M(t) ≈ k_b T L^2 χ∼ L^2+γ/ν (t ≪τ_1). Here, we used the equilibrium property χ∼ L^γ/ν. On the other hand, at very long times the two values of the magnetization are uncorrelated so that ⟨ M_t · M_0 ⟩ is small as compared to ⟨ M^2 ⟩. Hence we can derive that MSD_M saturates as follows MSD_M(t) = ⟨ M(t)^2 + M(0)^2 - 2 M(t)M(0) ⟩ ≈ 2⟨ M^2 ⟩≈ 2 k_b T L^2 χ. Rather than an operational procedure, the dynamics can also be formulated as the application of the transition matrix 𝒜 to a state vector S⃗. This is a rather unpractical formulation as 𝒜 is a sparse matrix of size 2^L^2× 2^L^2, but nevertheless useful for the sake of argument. This transition matrix has an eigenvalue of e_0 = 1, with an eigenvector in which each element lies the likelihood of that state (the Boltzmann distribution). It also has a second-highest eigenvalue e_1≈ 1, which determines the ultimate exponential decay of the autocorrelation. At long times t, the dynamical matrix is applied t L^2 times. Thus, expressed in A the dynamics can be written as C_M(t) =⟨S⃗_t 𝒜^tL^2S⃗_0 ⟩. For long times, the decay of the autocorrelation function is dominated by the largest non-zero eigenvector and eigenvalue C_M(t) ∼ e_1^tL^2∼exp[-t/τ_2], in which τ_2 = -L^2 ln(e_1). 
It is very hard to obtain τ_2 via e_1 numerically unless L is a very small number, but this provides a valid argument to show that the magnetization autocorrelation function will decay exponentially at long times for finite L. As it is natural, the intermediate regime has to connect the short- and long-time regimes monotonically. The numerical data suggest that this happens via anomalous diffusion, i.e., MSD_M ∼ t^α, whereas the autocorrelation function seems to decay as a stretched-exponential with the same anomalous exponent α. Clearly, the key quantities that we want to establish in this manuscript are the dynamic exponents z_1 and z_2, as well as the anomalous exponent α. To this end, we use the method of finite-size scaling <cit.>. Figure <ref> embodies the collapse of MSD_M(t) curves for the wide range of system sizes studied around the first transition point, obtained for z_1 = 0.45 ± 0.05. At the intermediate regime of this plot, the curve is expected to decay as ∼ t^α-1. Numerically, we estimate the anomalous exponent to be α = 0.752 ± 0.005. Figure <ref> now illustrates an analogous collapse of the curves for around the second transition point. This is attained by plotting -ln(C_M(t)/C_M(0))/(L^-z_2t) as a function of t/L^z_2, where z_2 = 2.1665 is set equal to the value for z as reported by Nightingale and Blöte <cit.>. The intermediate regime for MSD_M starts at time τ_1 ∼ L^z_1 at a value of ⟨ (Δ M)^2 ⟩∼ L^2+z_1, then increases following a power-law mode with an exponent α, until it reaches its saturation value ∼ L^2+γ/ν at time τ_2 ∼ L^z_2. Assuming a single power-law function in the intermediate regime, the anomalous exponent is expected to be α = (γ/ν-z_1)/(z_2 - z_1). Purely based on numerical findings, we speculate that z_1 = 1/2 and α = 3/4; in that case, we obtain from Eq. (<ref>) that z_2 = 13/6 = 2.1667 in excellent agreement with the most accurate numerical estimates <cit.>. To further corroborate on the main aftermath of our work, we undertook a parallel examination of the three-dimensional Ising ferromagnet. Analogously to the analysis sketched above for the two-dimensional Ising model, we obtained data collapses around the first and second crossover times. Figures <ref> - <ref> below summarize our main findings: Fig. <ref> exhibits the raw data, Fig. <ref> suggests that MSD_M(t)/(L^3 t) is a function of t/L^z_1 with z_1 = 1.35±0.02, and Fig.<ref> that -ln(Ĉ_M(t))/(L^-z_2t) is a function of t/L^z_2 with z_2 = 2.032 ± 0.003. Thus, as in two dimensions, the dynamical critical behavior features two crossover times characterized by two dynamic critical exponents. Additionally, the exponent of the intermediate anomalous diffusion α for the three-dimensional Ising ferromagnet is numerically found to be 0.90 ± 0.02. An overview of critical exponents reported in this manuscript is given in Tab. <ref>. § SUMMARY AND OUTLOOK We analyzed the results of extensive simulations of the two- and three-dimensional Ising model with Glauber dynamics. In particular, we scrutinized the mean-squared deviation and autocorrelation function of the magnetization, showcasing the existence of three dynamical regimes, separated by two crossover times at τ_1 ∼ L^z_1 and τ_2 ∼ L^z_2. In the short-time regime, the mean-squared deviation of the magnetization shows ordinary diffusive behavior and the autocorrelation function exponential decay. 
In the second, intermediate regime the mean-squared deviation is characterized by anomalous diffusive behavior and the autocorrelation function decays in a stretched-exponential fashion. Finally, in the third, late-time regime the mean-squared deviation saturates at a constant value while the autocorrelation function again decays exponentially. The second crossover to the exponential decay of the autocorrelation function has been extensively studied in the literature. Nightingale and Blöte reported that this exponential decay sets in at a time determined by the dynamic critical exponent z = 2.1665(12) <cit.>; this is in agreement with our estimate z_2 at the second crossover. To the best of our knowledge, the first crossover has either not been reported or was assumed to occur at some fixed time (i.e., z_1 = 0) without substantiation. The simulations and analysis presented here clearly demonstrate the existence of this first crossover at a time governed by a new dynamic critical exponent z_1. We also put forward a speculative argument about the crossover times in two dimensions. Purely on numerical grounds, we suspect the first crossover to correspond to a dynamic exponent z_1 = 1/2, and the exponent of the anomalous diffusion to be α = 3/4. In this case, we showed that the second crossover is governed by the exponent z_2 = 13/6, in full agreement with the numerical result z = 2.1665(12). At this stage, the development of a solid theoretical argument supporting the presence of the numerically observed first crossover and the relevant dynamic and anomalous-diffusion exponents z_1 and α, respectively, is called for. To sum up, we hope that the relevance of our work will be twofold: (i) On the practical side, for obtaining statistically uncorrelated samples the proper sampling frequency should be set by the newly reported exponent z_1: the correlation between consecutive samples separated by (multiples of) τ_1 ∼ L^z_1 has decayed, in a stretched-exponential fashion, to a value as small as one would want. (ii) On the theoretical side, the critical dynamical behavior of the Ising model with Glauber dynamics is much richer than reported to date, featuring two distinct crossovers. Thus, if dynamic universality exists, it must also be correspondingly richer and needs further investigation. In closing, we would like to offer some motivating comments for future work. In a recent paper <cit.> it was shown that the ϕ^4 model with local dynamics appears to belong to the same dynamic universality class as the Ising model; this was done by numerically probing the dynamic critical exponent, which was found to be z = 2.17(3). If this is indeed the case, then the exponent z_1 should also apply to the ϕ^4 model; see also Refs. <cit.> for further aspects of dynamic Ising universality. Furthermore, in Ref. <cit.> the Ising model with Kawasaki dynamics was studied and the authors reported that the Fourier modes of the magnetization are in very close agreement with the dynamical eigenmodes, suggesting that z = 4 - η = 15/4. Investigating this aspect through the prism of the newly introduced exponent z_1 might be another intriguing continuation of our work <cit.>. We plan to pursue these and other relevant open questions in the near future. We acknowledge the provision of computing time on the parallel computer cluster Zeus of Coventry University and TÜBİTAK ULAKBİM (Turkish agency), High Performance and Grid Computing Center (TRUBA Resources).
fisher74 M.E. Fisher, Rev. Mod. Phys. 46, 597 (1974).
fisher98 M.E. Fisher, Rev. Mod. Phys. 70, 653 (1998).
Ising25 E. Ising, Z. Physik 31, 253 (1925).
landau_book D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics (Cambridge University Press, Cambridge, England, 2000).
barkema_book M.E.J. Newman and G.T. Barkema, Monte Carlo Methods in Statistical Physics (Clarendon Press, 1999).
amit_book D.J. Amit and V. Martín-Mayor, Field Theory, the Renormalization Group and Critical Phenomena, 3rd ed. (World Scientific, Singapore, 2005).
hohenberg77 P.C. Hohenberg and B.I. Halperin, Rev. Mod. Phys. 49, 435 (1977).
folk06 R. Folk and G. Moser, J. Phys. A: Math. Gen. 39, R208 (2006).
hasenbusch07 M. Hasenbusch, A. Pelissetto, and E. Vicari, J. Stat. Mech. (2007) P11009.
zhong20 W. Zhong, G.T. Barkema, and D. Panja, Phys. Rev. E 102, 022132 (2020).
nightingale96 M.P. Nightingale and H.W.J. Blöte, Phys. Rev. Lett. 76, 4548 (1996).
hasenbusch20 M. Hasenbusch, Phys. Rev. E 101, 022126 (2020).
onsager44 L. Onsager, Phys. Rev. 65, 117 (1944).
kos16 F. Kos, D. Poland, D. Simmons-Duffin, and A. Vichi, J. High Energy Phys. 08 (2016) 036.
ferrenberg18 A.M. Ferrenberg, J. Xu, and D.P. Landau, Phys. Rev. E 97, 043301 (2018).
metropolis53 N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).
martinelli99 F. Martinelli, Lectures on Glauber Dynamics for Discrete Spin Models, in Lectures on Probability Theory and Statistics, edited by P. Bernard, Lecture Notes in Mathematics Vol. 1717 (Springer, Berlin, Heidelberg, 1999).
randall00 D. Randall and P. Tetali, J. Math. Phys. 41, 1598 (2000).
coulon04 C. Coulon, R. Clérac, L. Lecren, W. Wernsdorfer, and H. Miyasaka, Phys. Rev. B 69, 132408 (2004).
grandi96 B.C.S. Grandi and W. Figueiredo, Phys. Rev. E 53, 5484 (1996).
smedt03 G. De Smedt and C. Godreche, Eur. Phys. J. B 32, 215 (2003).
godreche04 C. Godreche, F. Krzakała, and F. Ricci-Tersenghi, J. Stat. Mech.: Theory and Exp. (2004) P04007.
coddington92 P.D. Coddington and C.F. Baillie, Phys. Rev. Lett. 68, 962 (1992).
rieger99 H. Rieger and N. Kawashima, Eur. Phys. J. B 9, 233 (1999).
bloete02 H.W.J. Blöte and Y. Deng, Phys. Rev. E 66, 066110 (2002).
zhong18 W. Zhong, G.T. Barkema, D. Panja, and R.C. Ball, Phys. Rev. E 98, 062128 (2018).
bloete95 H.W.J. Blöte, E. Luijten, and J.R. Heringa, J. Phys. A 28, 6289 (1995).
bloete98 H.W.J. Blöte and M.P. Nightingale, Physica A 251, 211 (1998).
hasenbusch99 M. Hasenbusch, K. Pinn, and S. Vinti, Phys. Rev. B 59, 11471 (1999).
zhong19 W. Zhong, D. Panja, and G.T. Barkema, Phys. Rev. E 100, 012132 (2019).
comment Of course the total magnetization (zero mode) is not a good observable in this case, as it is strictly conserved; one therefore has to study, for instance, the first Fourier mode of the magnetization.
http://arxiv.org/abs/2307.00375v1
20230701160908
Fractionality and PT-symmetry in an electrical transmission line
[ "Mario I. Molina" ]
nlin.PS
[ "nlin.PS" ]
http://arxiv.org/abs/2307.03186v1
20230706175840
TGRL: An Algorithm for Teacher Guided Reinforcement Learning
[ "Idan Shenfeld", "Zhang-Wei Hong", "Aviv Tamar", "Pulkit Agrawal" ]
cs.LG
[ "cs.LG" ]
TGRL: An Algorithm for Teacher Guided Reinforcement Learning
Idan Shenfeld (MIT), Zhang-Wei Hong (MIT), Aviv Tamar (Technion), Pulkit Agrawal (MIT)
MIT: Improbable AI Lab, Massachusetts Institute of Technology, Cambridge, USA
Technion: Technion - Israel Institute of Technology, Haifa, Israel
Correspondence to: Idan Shenfeld <[email protected]>
Machine Learning, ICML
Learning from rewards (i.e., reinforcement learning or RL) and learning to imitate a teacher (i.e., teacher-student learning) are two established approaches for solving sequential decision-making problems. To combine the benefits of these different forms of learning, it is common to train a policy to maximize a combination of reinforcement and teacher-student learning objectives. However, without a principled method to balance these objectives, prior work used heuristics and problem-specific hyperparameter searches to balance the two objectives. We present a principled approach, along with an approximate implementation, for dynamically and automatically balancing when to follow the teacher and when to use rewards. The main idea is to adjust the importance of teacher supervision by comparing the agent's performance to the counterfactual scenario of the agent learning without teacher supervision and only from rewards. If using teacher supervision improves performance, the importance of teacher supervision is increased; otherwise, it is decreased. Our method, Teacher Guided Reinforcement Learning (TGRL), outperforms strong baselines across diverse domains without hyper-parameter tuning. The code is available at https://sites.google.com/view/tgrl-paper/
§ INTRODUCTION
In Reinforcement Learning (RL), an agent learns decision-making strategies by executing actions, receiving feedback in the form of rewards, and optimizing its behavior to maximize cumulative rewards. Such learning by trial-and-error can be challenging, particularly when rewards are sparse or under partial observability <cit.>. A more data-efficient learning method is to directly supervise the agent with correct actions obtained by querying a teacher, as exemplified by the imitation learning algorithm DAgger <cit.>. Learning to mimic a teacher is significantly more data-efficient than reinforcement learning because it avoids the need to explore the consequences of different actions. However, learning from a teacher can be problematic when the teacher is sub-optimal or when it is impossible to perfectly mimic the teacher. In the first problematic case of a sub-optimal teacher, because the agent attempts to mimic the teacher's actions perfectly, its performance is inherently limited by the teacher's performance. Developing methods for training agents that surpass their sub-optimal teachers is an active research area <cit.>. The second problem occurs when the agent is unable to mimic the teacher. This can happen in the common scenario where the teacher chooses actions based on privileged information unavailable to the agent. For example, the teacher may have access to additional sensors when training in simulation <cit.>, external knowledge bases <cit.>, or accurate state estimates during training <cit.>. In some scenarios, the agent can make up for the information gap with respect to the teacher by accumulating information from a history of observations <cit.>. However, in the most general scenario, just using the history is insufficient, and the agent must take information-gathering actions (i.e.
explore) to acquire the information being used by the teacher before it can mimic it. However, since the teacher never performs information-gathering actions, the agent cannot learn such actions by mimicking the teacher. As an example, consider the "Tiger Door" environment illustrated in Figure <ref> <cit.>. The agent is placed in a maze with a goal cell (green), a trap cell (blue), and a button (pink). Reaching the goal and trap cells provides positive and negative rewards, respectively. The goal and trap cells randomly switch locations every episode. The teacher is aware of the location of all cells, whereas the agent (or the student) cannot observe the goal/trap cell locations. Instead, the student can go to the pink button, an action that reveals the goal location. In this setup, the goal-aware teacher takes actions that lead directly to the goal. However, the student must deviate from the teacher's actions to reach the pink button – a behavior that cannot be learned by imitation. Consider the general scenario where the agent's optimality is measured by the rewards it accumulates. Both when the teacher is sub-optimal and when it cannot be mimicked, trying to imitate the teacher will result in sub-optimal policies. In these scenarios, a student with access to a reward function can benefit by jointly learning from both the reward and the teacher's supervision. Learning from rewards provides an incentive for the agent to deviate from a sub-optimal teacher in order to outperform it, or to carry out information gathering when learning from a privileged teacher. Thus, by combining both forms of learning, the agent can leverage the teacher's expertise to learn quickly but also try different actions to check if a better policy can be found. The balance between when to follow the teacher and when to use rewards is delicate and can substantially affect the performance (i.e., total accumulated rewards) of the learned policy. In the absence of a principled method to balance the two objectives, prior work resorted to task-specific hyperparameter tuning <cit.>. In this work, we present a principled solution to automatically balance learning from rewards and learning from a teacher. Our main insight is that supervision from the teacher should only be used when it improves performance compared to learning solely from reward. To realize this, in addition to training the main policy that learns from both rewards and the teacher, we also train a second, auxiliary policy that learns the task by only optimizing rewards using reinforcement learning. At every training step, our algorithm compares the two policies. If the main policy performs better, it indicates that utilizing the teacher is beneficial, and the importance of learning from the teacher is increased. However, if the auxiliary policy performs better, the importance of the teacher's supervision in the main policy's objective is decreased. We call this algorithm for automatically adjusting the balance of imitation and RL objectives Teacher Guided Reinforcement Learning (TGRL). We empirically evaluate TGRL on a range of tasks where learning solely from a teacher is inadequate, and focus primarily on scenarios with a privileged teacher. The results show that TGRL either matches or outperforms existing approaches without the need for manual hyper-parameter tuning. The most challenging task we consider is robotic in-hand re-orientation of objects using only touch sensing. The superior performance of TGRL demonstrates its applicability to practical problems.
Finally, we also present experiments showing the effectiveness of TGRL in learning from sub-optimal teachers.
§ PRELIMINARIES
Reinforcement learning (RL). We consider the interaction between the agent and the environment as a discrete-time Partially Observable Markov Decision Process (POMDP) <cit.> consisting of state space 𝒮, observation space Ω, action space 𝒜, state transition function 𝒯: 𝒮×𝒜→Δ(𝒮), reward function R:𝒮×𝒜→ℝ, observation function 𝒪:𝒮→Δ(Ω), and initial state distribution ρ_0∈Δ(𝒮). The environment is initialized at an initial state s_0∼ρ_0. At each timestep t, the agent observes the observation o_t ∼ O(·|s_t), o_t ∈Ω, takes action a_t determined by the policy π, receives reward r_t = R(s_t, a_t), transitions to the next state s_t+1∼𝒯(·|s_t, a_t), and observes the next observation o_t+1∼ O(·|s_t+1). The goal of RL <cit.> is to find the optimal policy π^* maximizing the expected cumulative reward (i.e., the expected return). Since the agent has access only to the observations and not to the underlying states, seminal work showed that the optimal policy may depend on the history of observations τ_t: {o_0, a_0, o_1,a_1...o_t}, and not only on the current observation o_t <cit.>. Our aim is to find the optimal policy π^*:τ→Δ(𝒜) that maximizes the following objective: π^* = arg max_π J_R(π) := 𝔼[ ∑^∞_t=0γ^t r_t ]. Teacher-Student Learning (TSL). Suppose the agent (also referred to as the student in this paper) has access to a teacher that computes actions, a_t^T ∼π̅(· | o^T_t), using an observation space that may be different from the student's, o^T_t ∼Õ(·|s_t); o^T_t ∈Ω^T. We are agnostic to how the teacher is constructed. In general, it is a black box that can be queried by the student for actions at any state the student encounters during training. Given such a π̅, we aim to train the student policy, π_θ(· | o_t), operating from observations, o_t ∈Ω. A straightforward way to train the student is to use supervised learning to match the teacher's actions <cit.>, max_θ𝔼_π̅logπ_θ(a_t^T | o_t), where o_t is the student's observation and a_t^T is the teacher's action computed from its observation, o_t^T, corresponding to the state s_t obtained by rolling out the teacher. However, recent work found that better student policies can be learned by using reinforcement learning to maximize the sum of rewards, where the reward is computed as the cross-entropy between the teacher's and student's action distributions <cit.>. This leads to the following optimization problem: max_πJ_I(π) := max_π𝔼[ - ∑_t=0^H γ^t H^X_t(π|π̅)] where H^X_t(π|π̅)=-𝔼_a∼π(·|τ_t)[logπ̅(a|o^T_t)] is the Shannon cross-entropy and, for convenience of notation, π_θ is denoted as π. This objective is similar to DAgger <cit.> in optimizing the learning objective using the data collected by the student. Problems in Teacher-Student Learning. To understand the problems in TSL, consider the recent result implying that a student trained with TSL learns the statistical average of the teacher's actions for each observable state o∈Ω: In the setting described above, denote π^TSL = arg max_π J_I(π) and f(o^T):Ω_T→Ω as the function that maps the teacher's observations to the student's observations. Then, for any o^T∈Ω_T with o=f(o^T), we have that π^TSL(o)=𝔼[π̅(s)|o=f(o^T)]. See <cit.> proposition 1 or <cit.> theorem 1.
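To make the teacher-student objective above concrete, the following is a minimal sketch of the per-step cross-entropy "imitation reward" -H^X_t(π|π̅) for a discrete action space. It is an illustration only, not the paper's implementation; the probability-vector interface and the example distributions are assumptions.

import numpy as np

def imitation_reward(student_probs: np.ndarray, teacher_probs: np.ndarray,
                     eps: float = 1e-8) -> float:
    """-H^X(pi|pi_bar) = sum_a pi(a|tau_t) * log pi_bar(a|o^T_t); larger means closer imitation."""
    cross_entropy = -np.sum(student_probs * np.log(teacher_probs + eps))
    return -cross_entropy

# A student that hedges between two actions the teacher distinguishes is penalized heavily.
student = np.array([0.5, 0.5, 0.0, 0.0])
teacher = np.array([1.0, 0.0, 0.0, 0.0])
print(imitation_reward(student, teacher))  # strongly negative: poor imitation
print(imitation_reward(teacher, teacher))  # approximately 0: perfect imitation

In the Tiger Door example below, this is exactly the reward a hedging student receives at the intersection, which is why pure imitation plateaus there.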
Proposition <ref> leads to two problems: (i) Since the student's actions are the statistical average of the teacher's actions, it cannot outperform a sub-optimal teacher, as there is no incentive to explore actions other than the teacher's. (ii) If the difference in observation spaces between the teacher and student is large, learning the statistical average can lead to sub-optimal performance. This is because the student cannot distinguish two different teacher observations that appear identical in the student's observation space. As a result, the student policy does not mimic the teacher, but instead learns the average action, which can lead to sub-optimal performance (Eq. <ref>) <cit.>. For example, in the Tiger Door environment, the student will follow the teacher until the second intersection (where the corridor splits into two paths for the two possible goal locations). The teacher policy takes a left or right action depending on where the goal is. Because the student does not observe the goal, it will learn to mimic the teacher's policy by assigning equal probability to actions leading to either of the sides. This policy is sub-optimal since the student will reach the goal only in 50% of trials.
§ METHOD
Since Teacher-Student Learning can lead to a sub-optimal student, the student needs to explore actions different from the teacher's in order to outperform it and find a better policy. We assume that the student has access to task rewards in addition to a teacher. This reward function can guide the exploratory process by determining when deviating from the teacher is fruitful. Following prior work <cit.>, we consider the scenario of the student learning from a combination of reinforcement (Equation <ref>) and teacher-student (Equation <ref>) learning objectives: max_πJ_R+I(π, α)=max_π𝔼[ ∑_t=0^H γ^t(r_t-α H^X_t(π|π̅))] where α is the balancing coefficient between the RL and imitation learning objectives. The joint objective can also be expressed as: J_R+I(π, α)=J_R(π)+α J_I(π). Here, J_I(π) can also be interpreted as a form of reward shaping <cit.>, where the agent is negatively rewarded for taking actions that differ from the teacher's action. As the balancing coefficient between the task reward and the teacher guidance, the value of α greatly impacts the algorithm's performance. A low α limits the guidance the student gets from the teacher, resulting in the usual challenges of learning solely from rewards. A high value of α can lead to excessive reliance on a sub-optimal teacher, resulting in sub-optimal performance. Without a principled way to choose α, a common practice is to find the best value of α by conducting a separate and extensive hyperparameter search for every task <cit.>. Besides the inefficiency of such a search, as the agent progresses on a task, the amount of guidance it needs from the teacher can vary. Therefore, a constant α may not be optimal throughout training. Usually, the amount of guidance the student needs diminishes over the course of training, but the exact dynamics of this trade-off are task-dependent, and per-task tuning is tedious, undesirable, and often computationally infeasible.
§.§ Teacher Guided Reinforcement Learning (TGRL)
Our notion of an optimal policy is one that achieves maximum cumulative task reward, and reinforcement learning optimizes this objective directly. Therefore, the teacher's supervision should only be used when it helps achieve better performance than just using task rewards.
This idea is implemented by adding the following constraint: the performance of the policy learning from both rewards and teacher (i.e., the main policy) must be at par with or outperform a policy trained using only task rewards (i.e., the auxiliary policy). Hence, our optimization problem becomes: max_π J_R+I(π, α) s.t. J_R(π) ≥ J_R(π_R) where π_R is the auxiliary policy trained only using the task reward (Eq. <ref>). Overall, our algorithm iterates between improving the auxiliary policy by solving max_π_R J_R(π_R) and solving the constrained problem in Equation <ref>. A recent paper <cit.> used a similar constraint in another context, to balance between exploration and exploitation in conventional RL. More formally, for i=1,2,… we iterate between two stages:
* Partially solving π_R^i = arg max_π_R J_R(π_R) to get an updated estimate J_R(π_R^i).
* Solving the i^th optimization problem: max_π J_R+I(π, α) subject to J_R(π) ≥ J_R(π_R^i)
The constrained optimization problem in Equation <ref> is solved using the dual Lagrangian method, which has worked well in reinforcement learning <cit.>. Using Lagrange duality, we transform the constrained problem into an unconstrained min-max optimization problem. The dual problem corresponding to the primal problem in Equation <ref> is: min_λ≥ 0 max_π[ J_R+I(π, α) +λ( J_R(π) - J_R(π_R) ) ] = min_λ≥ 0 max_π[ (1+λ) J_R+I(π, α/(1+λ)) - λ J_R(π_R) ] where λ is the Lagrange multiplier. The full derivation can be found in Appendix <ref>. The resulting unconstrained optimization problem is comprised of two optimization problems. The first optimization problem (i.e., the inner loop) solves for π. Since J_R(π_R) is independent of π, this optimization is akin to solving the combined objective of Equation <ref> but with the importance of the imitation learning reward set to α/(1+λ). Further, for λ≥0, we have α ≥ α/(1+λ) ≥ 0, which means that α is the upper bound on the importance of imitation rewards. We also refer to the importance of imitation rewards, α/(1+λ), as the balancing coefficient. The second stage involves solving for λ. The dual function is always convex since it is the point-wise maximum of functions that are affine in λ <cit.>. Therefore it can be solved with gradient descent without worrying about local minima. The gradient of Equation <ref> with respect to the Lagrange multiplier, λ, leads to the following update rule: λ_new = λ_old - μ [J_R(π) - J_R(π_R)] where μ is the step size for updating λ. See Appendix <ref> for the full derivation. This update rule is intuitive: if the policy using the teacher (π) achieves more task reward than the auxiliary policy (π_R) trained without the teacher, then λ is decreased, which in turn increases α/(1+λ), making the optimization of π more reliant on the teacher in the next iteration. Otherwise, if π_R achieves a higher reward than π, then an increase in λ decreases the importance of the teacher. When utilizing Lagrange duality to solve a constrained optimization problem, it is necessary to consider the duality gap, which is the difference between the optimal dual and primal values. A non-zero duality gap implies that the solution of the dual problem is only a lower bound to the primal problem and does not necessarily provide the exact solution <cit.>. Under certain assumptions listed in Proposition <ref>, we show that for our optimization problem, there is no duality gap (proof in Appendix <ref>). Thus, solving the dual problem also solves the primal problem. Denote η_i=J_R(π_R^i). Suppose that the reward function r(s,a) and the cross-entropy term H^X(π|π̅) are bounded.
Then for every η_i∈ℝ the primal and dual problems described in Eq. <ref> and Eq. <ref> have no duality gap. Moreover, if the sequence {η_i}_i=1^∞ converges, then there is no duality gap in the limit. Notice that in the general case, the cross-entropy term can reach infinity when the supports of the policies do not completely overlap, violating the assumption of H^X(π|π̅) being bounded. As a remedy, we clip the value of the cross-entropy term in our implementation of TGRL.
§.§ Implementation
Off-policy approach: We implemented our algorithm using an off-policy actor-critic approach. Off-policy learning allows data collected by both policies, π and π_R, to be stored in a common replay buffer used for training both policies. Our objective is to maximize the joint Q-value: Q_R+I=Q_R+ α/(1+λ) Q_I, where Q_R, Q_I denote the Q-values of actions with respect to the task (Equation <ref>) and imitation (Equation <ref>) rewards, respectively. Instead of directly learning Q_R+I, we train two critic networks, Q_R and Q_I, and combine their values to estimate Q_R+I. This choice enables us to estimate Q_R+I for different values of λ without any need for re-training the critics. We also represent π and π_R with separate actor networks optimized to maximize the corresponding Q-values. In the data collection step, half of the trajectories are collected using π and the other half using π_R. See Algorithm <ref> for an outline of our method and Appendix <ref> for further details. Estimating the performance difference: As shown in Equation <ref>, the gradient of the dual problem with respect to λ is the performance difference between the two policies, J_R(π) - J_R(π_R). To estimate the performance difference, one option is to perform Monte-Carlo estimation – i.e., roll out trajectories using both policies and determine the empirical estimate of cumulative rewards. However, a good estimate of cumulative rewards requires collecting a large number of trajectories, which would make our method data inefficient. Another, data-efficient option is to reuse data in the replay buffer for estimating the performance difference. Because the data in the replay buffer was not collected using the current policies, we make the off-policy correction using approximations obtained by extending a prior result from <cit.>, known as the objective difference lemma, to the off-policy case: Let ρ(s,a,t) be the distribution of states, actions, and timesteps currently in the replay buffer. Then the following is an unbiased approximation of the performance difference: J_R(π) - J_R(π_R)= 𝔼_(s,a,t)∼ρ[γ^t(A_π_R(s,a)-A_π(s,a))] Another challenge in estimating the gradient of λ (i.e., the performance difference between the main and auxiliary policies) is the variability in the scale of the policies' performance across different environments and during training. This makes it difficult to determine an appropriate learning rate for the weighting factor λ that will work effectively in all settings. To address this issue, we found it necessary to normalize the performance difference value during the training process. This normalization allows us to use a fixed learning rate across all of our experiments.
§ EXPERIMENTS
We perform four sets of experiments. In Sec. <ref>, we provide a comparison to previous work in cases where the teacher is too good to mimic. In Sec. <ref> we solve an object reorientation problem with tactile sensors, a difficult partially observable task that both RL and TSL struggle to solve. In Sec.
<ref> we look into the ability of the TGRL agent to surpass the teacher's performance. Finally, in Sec. <ref> we do ablations of our own method to show the contribution of individual components.
§.§ TGRL performs well, without a need for hyperparameter tuning
We provide empirical evidence (1) showcasing the robustness of TGRL to the choice of hyperparameters controlling the update of the balancing coefficient and (2) comparing it to prior work. We compare TGRL to the following: TSL. A pure Teacher-Student Learning approach that optimizes only Equation <ref>. COSIL <cit.>. This algorithm also uses entropy-augmented RL (Eq. <ref>) to combine the task reward and the teacher's guidance. To adjust the balancing coefficient α, they propose an update rule for maintaining a fixed distance (D̅) between the student's and teacher's policies by minimizing α(J_I(π)-D̅) using gradient descent. Choosing the right value of D̅ is a challenge since it is unknown a priori how similar the student and the teacher should be. Moreover, D̅ can change drastically between environments, depending on the action space support. To tackle this issue, we run a hyperparameter search with N=8 different values of D̅ and report performance for the best hyperparameter per task (COSIL_best) and average performance across hyperparameters (COSIL_avg). ADVISOR-off. An off-policy version of the algorithm from <cit.> that uses a state-dependent balancing coefficient. First, an imitation policy is trained using only the teacher-student learning loss. Then, for every state, the action distribution of the teacher policy is compared against the imitation policy. The states in which the two policies disagree are deemed to be ones where there is an information gap. For such states, the teacher is trusted less and more importance is given to the task reward. PBRS <cit.>. A potential-based reward shaping (PBRS) method based on <cit.> to mitigate issues with Teacher-Student Learning. PBRS uses a given value function V(s) to assign higher rewards to more beneficial states, which can lead the agent to trajectories favored by the policy associated with that value function: r_new=r_task+γ V(s_t+1)-V(s_t) where r_task is the task reward. Since their algorithm is on-policy, for a fair comparison, we created an off-policy version of this method. For this, first, we train an imitation policy by minimizing only the teacher-student learning loss (Eq. <ref>). Then, we train a neural network to represent the value function of this imitation policy. Using this value function, we obtain an augmented reward function described in Equation <ref>, which is then used to train a policy using the SAC algorithm <cit.>. Experimental Domains. We experiment on diverse problems taken from prior works studying issues in Teacher-Student Learning, spanning discrete and continuous action spaces, and both proprioceptive and pixel observations. For a description of each environment, see Appendix <ref>. For a fair comparison, we used the same code and Q-learning hyperparameters for all algorithms, tuning only the hyperparameters involved in balancing the teacher supervision against the task reward. The Q-learning hyperparameters correspond to hyperparameters of the RL algorithm chosen from the best-performing SAC agent. For TGRL we only used a single value for the initial coefficient λ_init and coefficient learning rate μ for all tasks (more details in Appendix <ref>). Comparison to Baselines.
Results in Figure <ref> show that while each baseline method succeeds in some sub-set of tasks, no baseline method is effective on all tasks. In contrast, TGRL solves all tasks successfully with data efficiency comparable to the best baseline in each task. Most importantly, TGRL requires no task-specific hyperparameter tuning. Notice that COSIL demonstrates comparable performance on three out of four tasks when its hyperparameters are carefully tuned (i.e., COSIL_best). However, the average performance across all hyperparameters (i.e., COSIL_avg) is significantly lower. This highlights the sensitivity of COSIL to the choice of hyperparameters. While PBRS does not require hyperparameter tuning, consistent with results from another work <cit.>, it converges slower than other teacher-student methods and doesn't consistently perform well across tasks. ADVISOR achieves good performance on the Lava Crossing and Light-Dark Ant tasks but converges to a sub-optimal policy, achieving performance comparable to Teacher-Student Learning (TSL) on the Tiger Door and Memory environments. The sub-optimal performance is due to a fundamental limitation of the ADVISOR algorithm. As a reminder, ADVISOR works by identifying states where the student has insufficient information to follow the teacher. For such states, instead of imitating the teacher, task rewards are used to decide the action. In the Tiger Door environment (see Figure <ref>), the student has sufficient information to follow the teacher until the state at which the two arms of the environment split. However, this is too late for the student to deviate from the teacher – to achieve optimal performance, the student should have deviated from the teacher earlier and gone to the pink button. This example illustrates a problem that ADVISOR encounters in environments where the information-gathering actions deviating from the teacher need to be performed before encountering the state at which the student cannot imitate the teacher. Robustness to Hyperparameters. To demonstrate the robustness of the choice of λ_init, we experimented with different values on the Lava Crossing environment. The results in Figure <ref> (left) show that irrespective of the choice of λ_init, TGRL achieves the same asymptotic performance. This indicates that TGRL can effectively adjust λ, regardless of its initial value, λ_init.
§.§ TGRL can solve difficult environments with significant partial observability.
To investigate the performance of our method on a more practical task with severe partial observability, we experimented with the Shadow Hand and the task of re-orienting a pen to a target pose using only touch sensors and proprioceptive information <cit.>. Consider a Teacher-Student setup where the teacher policy observes the pen's pose and velocity. The student, however, only has access to an array of tactile sensors located on the palm of the hand and the phalanges of the fingers. To solve the task, the student needs to move its fingers along the pen and use the readings of these sensors to infer its pose. The teacher does not need to take these information-gathering actions. Thus, just mimicking the teacher will result in a sub-optimal student. To train all agents, the reward was set to the negative of the distance between the pen's current pose and the goal. The pen has rotational symmetry around the z axis, so the distance was computed only over rotations around the x and y axes.
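Concretely, one way such a symmetry-aware distance might be computed is sketched below. This is an illustrative assumption rather than the authors' code: rotation-matrix inputs and the body z-axis convention are assumed, and the per-step reward would then be the negative of this distance.

import numpy as np

def pen_orientation_distance(rot_cur: np.ndarray, rot_goal: np.ndarray) -> float:
    """Angle (radians) between the pen's body z-axis under the two orientations."""
    axis_cur = rot_cur[:, 2]     # body z-axis expressed in the world frame
    axis_goal = rot_goal[:, 2]
    cos_angle = np.clip(np.dot(axis_cur, axis_goal), -1.0, 1.0)
    return float(np.arccos(cos_angle))

# A pure spin about the pen's own axis leaves the distance at zero.
spin_about_z = np.array([[0.0, -1.0, 0.0],
                         [1.0,  0.0, 0.0],
                         [0.0,  0.0, 1.0]])
print(pen_orientation_distance(np.eye(3), spin_about_z))   # 0.0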
A trajectory was considered successful if the pen came within 0.1 radians of the goal orientation. The performance is averaged over 1,000 randomly sampled initial and goal poses. The results are in Figure <ref>. First, to assess the difficulty of the task, we report the results of an RL agent trained with Soft Actor-Critic and Hindsight Experience Replay (HER) <cit.> over the student's observation space. This RL agent achieved a 47% success rate, demonstrating the difficulty of learning this task using RL alone. The teacher, also trained using SAC and HER but on the full state space, achieved a 78% success rate. Performing vanilla Teacher-Student learning using this teacher resulted in an agent with a 54% success rate. This performance gap shows that just imitating the teacher is not sufficient, and a deviation from the teacher's actions is indeed required to learn a good policy. With TGRL, the agent achieved a significantly higher success rate of 73%. These results demonstrate the usefulness of our algorithm and its ability to use the teacher's guidance while learning from the reward at the same time. TGRL also outperforms baseline methods (see Appendix <ref> for more details).
§.§ TGRL can surpass the Teacher's performance
To evaluate the ability of TGRL to surpass a sub-optimal teacher, we conducted experiments in several domains. For the Tiger Door and Lava Crossing environments, we constructed teachers with different optimality levels ranging from 40% to 100% success rate. Results in Table <ref> show that even with a sub-optimal teacher, TGRL learns the optimal policy in the Tiger Door environment. Lava Crossing is a more challenging task, where vanilla SAC achieves a 0% success rate. Therefore, combining learning from the task reward with teacher supervision allows TGRL to achieve better performance than the sub-optimal teacher, but still not a 100% success rate. In addition, we experimented with a variant of the Shadow Hand environment, where both student and teacher have access to the full state, but the teacher is sub-optimal. The results depicted in Figure <ref> show that TGRL converges quickly to the teacher's performance but is then able to keep improving by utilizing task reward supervision, eventually exceeding the teacher's performance.
§.§ Ablations
Joint versus separate replay buffer. We empirically found that having a joint replay buffer between the two policies, π and π_R, is necessary for good performance. In Figure <ref>, we compare the performance of our method with separate and joint replay buffers for the two policies on the Light-Dark Ant environment. As a reminder, the auxiliary policy (π_R) limits the set of feasible policies in Equation <ref>. In tasks where it is hard to learn a good policy using only task rewards, the performance of π_R will be poor, leading to a loose constraint, which will be ineffective. Combining the replay buffers allows π_R to learn from trajectories collected by the main policy (π), thus enabling it to achieve better performance. This, in turn, leads to a stricter constraint on the main policy, pushing it to achieve better performance. Fixed versus adaptive balancing coefficient. A benefit of TGRL is that the balancing coefficient in the combined objective (Equation <ref>) dynamically changes during the training process based on the value of λ. To investigate if an adaptive coefficient is indeed beneficial, we conducted an ablation study wherein we trained policies in the Shadow Hand environment with fixed coefficients.
Figure <ref> shows that the balancing coefficient of TGRL changes during training (left plot). At the start of training, the value is high, indicating that the teacher is given high importance. As the agent learns, the value decreases, indicating that learning from rewards is given more importance in the later stages of training. The results in the right plot of Figure <ref> show that TGRL with a dynamically changing balancing coefficient outperforms ablated versions with fixed coefficients. This result indicates that TGRL goes beyond mitigating the need for searching the balancing coefficient – it also outperforms a fixed balancing coefficient found by rigorous hyperparameter search.
§ DISCUSSION
While TGRL improved performance across all tasks, it has its limitations. If the agent needs to deviate substantially from the teacher, then intermediate policies during learning might have worse performance than the teacher before the agent is able to leverage rewards to improve performance. In such a case, the imitation learning policy is a local minimum, overcoming which may require additional exploration incentives <cit.>. Second, for the constraint in Equation <ref> to be meaningful, π_R trained only with task rewards should achieve reasonable performance. While having a shared replay buffer with π may help in some hard exploration problems, learning of π_R can fail, which would make the constraint ineffective. An interesting investigation that we leave to future work is to have a state-dependent balancing coefficient. As the difference between the teacher's and student's actions can be state-dependent, such flexibility can accelerate convergence and lead to better performance.
§ ACKNOWLEDGEMENTS
We thank the members of the Improbable AI lab for the helpful discussions and feedback on the paper. We are grateful to MIT Supercloud and the Lincoln Laboratory Supercomputing Center for providing HPC resources. The research was supported in part by the MIT-IBM Watson AI Lab, Hyundai Motor Company, DARPA Machine Common Sense Program, and ONR MURI under Grant Number N00014-22-1-2740.
§ AUTHOR CONTRIBUTIONS
Idan Shenfeld identified the current problem with teacher-student algorithms, developed the TGRL algorithm, derived the theoretical results, conducted the experiments, and wrote the paper. Zhang-Wei Hong helped in the debugging, implementation, and the choice of necessary experiments and ablations. Aviv Tamar helped derive the theoretical results and provided feedback on the writing. Pulkit Agrawal oversaw the project. He was involved in the technical formulation, research discussions, paper writing, overall advising, and positioning of the work.
§ DERIVATIONS AND PROOFS
§.§ Derivation of the Dual Problem
Denote η_i=J_R(π_R^i), and given the Primal Problem we derived in Eq.
<ref>: max_π J_R+I(π, α) subject to J_R(π) ≥η_i The corresponding Lagrangian is: ℒ(π, λ) = J_R+I(π, α) +λ( J_R(π) - η_i) = 𝔼_π[ ∑_t=0^∞γ^t(r_t-α H^X_t(π|π̅))] + λ𝔼_π[ ∑_t=0^∞γ^t r_t] - λη_i = 𝔼_π[ ∑_t=0^∞γ^t ( (1+λ)r_t-α H^X_t(π|π̅) ) ] - λη_i = 𝔼_π[ (1+λ)∑_t=0^∞γ^t(r_t- (α/(1+λ)) H^X_t(π|π̅))] - λη_i = (1+λ) J_R+I(π, α/(1+λ)) - λη_i and therefore our dual problem is: min_λ≥ 0 max_π[ (1+λ) J_R+I(π, α/(1+λ)) - λη_i ]
§.§ Derivation of update rule for λ
The gradient of the dual problem with respect to λ is: ∇_λ[ (1+λ) J_R+I(π, α/(1+λ)) - λη_i ] = ∇_λ[𝔼_π[ ∑_t=0^∞γ^t ( (1+λ)r_t-α H^X_t(π|π̅) ) ] - λη_i ] = 𝔼_π[ ∑_t=0^∞γ^t r_t ] - η_i = J_R(π) - η_i
§.§ Duality Gap - Proof for Proposition <ref>
We start by restating our assumptions and discussing why they hold for our problem: The reward function r(s,a) and the cross-entropy term H^X(π|π̅) are bounded. Justification for A.1. This is achieved by using a clipped version of the cross-entropy term. We add that we found the clipping helpful in practice, since it stops this term from reaching infinity when the supports of the teacher and student action distributions are not the same. The sequence {η_i}_i=1^∞ is monotonically increasing and converging, i.e., there exists η∈ℝ such that lim_i→∞η_i = η. Justification for A.2. Recall that the sequence {η_i}_i=1^∞ is the result of incrementally solving max_π_R J_R(π_R). Having this sequence be monotonically increasing is equivalent to a guarantee of policy improvement in each optimization step, an attribute of several RL algorithms such as Q-learning or policy gradient <cit.>. Regarding convergence, since the reward is upper bounded by assumption <ref>, we have an upper-bounded, monotonically increasing sequence of real numbers, which converges. There exists ϵ>0 such that for all i, the value of η_i is at most J_R(π^*)-ϵ. Justification for A.3. This assumption is equivalent to stating that J_R(π^*)-J_R(π_R)>0, meaning that π_R is never optimal. Without further assumptions on the algorithm used to optimize π_R, we cannot guarantee that this will not happen. However, if it happens, it means that we were able to find the optimal policy, and therefore there is no need to continue with the optimization procedure. As a remedy, we define a new sequence {η̃_i}_i=1^∞ where η̃_i = η_i-ϵ and use it instead of the original η_i. Since ϵ can be as small as we want, its effect on the algorithm is negligible and it serves mainly for the completeness of our theory. Before going into our proof, we cite Theorem 1 of <cit.>, which is the basis of our results: Given the following optimization problem: P^* = max_π𝔼_π[ ∑_t=0^H γ^t r_0(s_t,a_t)] subject to 𝔼_π[ ∑_t=0^H γ^t r_i(s_t,a_t)] ≥ c_i, i=1...m, and its dual form: D^* = min_λ≥ 0 max_π𝔼_π[ ∑_t=0^H γ^t r_0(s_t,a_t)] + λ∑_i=1^m[ 𝔼_π[ ∑_t=0^H γ^t r_i(s_t,a_t)] - c_i] suppose that r_i is bounded for all i = 0, . . . , m and that Slater's condition holds. Then, strong duality holds, i.e., P^* = D^*. Having stated that, we move to prove the two parts of our proposition: Given assumptions <ref> and <ref>, for every η_i∈ℝ, the constrained optimization problem Eq. <ref> and its dual problem defined in Eq. <ref> do not have a duality gap. We align our problem with the notation of Theorem <ref> by denoting: r_0:r_t-α H^X_t, r_1:r_t, c_1:η_i and we can see that our problem is a special case of the optimization problem defined above. For every η_i, there is a set of feasible solutions in the form of an ϵ-neighborhood of π^*.
This holds since J_R(π^*)>J_R(π)-ϵ for every π∉π^*. Therefore, Slater's condition holds, as it requires that the feasible solution set have an interior point. Together with assumption <ref>, we have all that we need to claim that Theorem <ref> applies to our problem. Therefore, there is no duality gap. Given all our assumptions, the constrained optimization problem at the limit: max_π J_R+I(π, α) subject to J_R(π) ≥η has no duality gap. Our proof is based on the Fenchel-Moreau theorem <cit.>, which states: If (i) Slater's condition holds for the primal problem and (ii) its perturbation function P(ξ) is concave, then strong duality holds for the primal and dual problems. Denote by η_lim the limit of the sequence. Without loss of generality, we assume that η_lim=J_R(π^*)-ϵ. If not, we just adjust ϵ accordingly. As in the last proof, Slater's condition holds since there is a set of feasible policies for the problem. Regarding the second requirement, the sequence of perturbation functions for our problem is: P(ξ)=lim_i→∞ P_i(ξ) where P_i(ξ)= max_π J_R+I(π, α) subject to J_R(π) ≥η_i+ξ Notice that this is a scalar function, since P_i(ξ) is the maximum objective itself, not the policy that induces it. We now prove that this sequence of functions converges point-wise:
* For all ξ>ϵ we claim that P(ξ)=lim_i→∞ P_i(ξ)=-∞. As a reminder, η_i converges to J_R(π^*)-ϵ. This means that there exists N such that for all n>N we have |η_n-J_R(π^*)+ϵ|<ξ/2-ϵ. Moreover, since J_R(π^*)-ϵ is also the upper bound on the sequence of η_i, we can remove the absolute value and get: 0≤ J_R(π^*)-ϵ-η_n<ξ/2-ϵ This yields the following constraint: J_R(π_θ) ≥η_n+ξ > J_R(π^*) - ξ/2+ξ = J_R(π^*) + ξ/2 But since ξ>ϵ>0 and π^* is the optimal policy, no policies are feasible for this constraint, so from the definition of the perturbation function, we have P_n(ξ)=-∞. This holds for all n>N, and therefore also lim_i→∞ P_i(ξ)=-∞.
* For all ξ≤ϵ we prove convergence to a fixed value. First, we claim that the perturbation function has a lower bound. This is true since the reward function and the cross-entropy are bounded, and the perturbation function value is a discounted sum of them. In addition, the sequence of P_i(ξ) is monotonically decreasing. To see this, recall that the sequence {η_i}_i=1^∞ is monotonically increasing. Since J_R(π) is also upper bounded by J_R(π^*), the feasible set of the (i+1)-th problem is a subset of the feasible set of the i-th problem, and of all those that came before. Therefore, if the solution to the i-th problem is still feasible, it will also be the solution to the (i+1)-th problem. If not, then it has a lower objective (since it was also feasible in the i-th problem), resulting in a monotonically decreasing sequence. Finally, for every η_i there is at least one feasible solution, π^*, meaning the perturbation function takes a real value. To conclude, {P_i(ξ)}_i=1^∞ is a monotonically decreasing, lower-bounded sequence in ℝ and therefore it converges.
Having established point-wise convergence to a function P(ξ), all that remains is to prove that this function is concave. According to Proposition <ref>, each optimization problem doesn't have a duality gap, meaning its perturbation function is concave. Since every function in the sequence is concave, and there is pointwise convergence, P(ξ) is also concave. To conclude, from the Fenchel-Moreau theorem, our optimization problem doesn't have a duality gap in the limit.
§.§ Performance Difference Estimation - Proof for Proposition 3
Proposition: Let ρ(s,a,t) be the distribution of states, actions, and timesteps currently in the replay buffer. Then the following is an unbiased approximation of the performance difference: J_R(π) - J_R(π_R)= 𝔼_(s,a,t)∼ρ[γ^t(A_π_R(s,a)-A_π(s,a))] Proof: Let π_RB be the behavioral policy induced by the data currently in the replay buffer, meaning: ∀ s∈ S, π_RB(a|s)= (∑_a'∈RB(s) I_a'=a) / (∑_a'∈RB(s) 1) Using Lemma 6.1 from <cit.>, for every two policies π and π̃ we can write: η(π̃)-η(π) =η(π̃)-η(π_RB)+η(π_RB)-η(π)= -[η(π_RB)-η(π̃)]+η(π_RB)-η(π)= -∑_s∑_t=0^∞γ^t P(s_t=s|π_RB)∑_aπ_RB(a|s)A_π̃(s,a)+ ∑_s∑_t=0^∞γ^t P(s_t=s|π_RB)∑_aπ_RB(a|s)A_π(s,a)= ∑_s∑_t=0^∞γ^t P(s_t=s|π_RB)∑_aπ_RB(a|s)[A_π(s,a)-A_π̃(s,a)] Assuming we can sample tuples of (s, a, t) from our replay buffer and denoting this distribution ρ_RB(s,a,t), we can write the above equation as: η(π̃)-η(π)=∑_s,a,tρ_RB(s,a,t)γ^t[A_π(s,a)-A_π̃(s,a)] which we can approximate by sampling such tuples from the replay buffer.
§ EXPERIMENTAL DETAILS
In this section, we outline our environments, training process, and hyperparameters. Environment details. The following list contains details about all the environments used to test our algorithm and compare it to the baselines. Tiger Door. A robot must navigate to the goal cell (green), without touching the failure cell (blue). The cells, however, randomly switch locations every episode, and their nature is not observed by the agent. The maze also includes a pink button that reveals the correct goal location. Pixel observations with discrete action space. Lava Crossing. A minigrid environment where the agent starts in the top-left corner and needs to navigate through a maze of lava in order to get to the bottom-right corner. The episode ends in failure if the agent steps on the lava. The teacher has access to the whole map, while the student only sees a patch of 5x5 cells in front of it. Pixel observations with discrete action space. Memory. A minigrid environment. The agent starts in a corridor containing two objects. It then has to go to a nearby room containing a third object, similar to one of the previous two. The agent's goal is to go back and touch the object it saw in the room. The episode ends in success if the agent goes to the correct object and in failure otherwise. While the student has to go to the room to see which object is the current one, the teacher starts with that knowledge and can go to it directly. Pixel observations with discrete action space. Light-Dark Ant. A Mujoco Ant environment with a fixed goal and a random starting position. The starting position and the goal are located on the "dark" side of the room, where the agent has access only to a noisy measurement of its current location. It has to take a detour through the "lighted" side of the room, where the noise is reduced significantly, enabling it to understand its location. On the other hand, the teacher has access to its precise location at all times, enabling it to go directly to the goal. This environment is inspired by a popular POMDP benchmark <cit.>. Proprioceptive observation with continuous action space. Training process. Our algorithm optimizes two policies, π and π_R, using off-policy Q-learning. The algorithm itself is orthogonal to the exact details of how to perform this optimization.
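Independent of the underlying RL optimizer, the coefficient update at the core of TGRL is lightweight. The following is a minimal, self-contained sketch of one dual step on λ; the array-based advantage inputs and the normalization constant are illustrative assumptions, while the values α=3, λ_init=9, and step size μ=3e-3 match the hyperparameters reported below. In the full algorithm, this step would be interleaved with the critic and actor updates of both π and π_R on the shared replay buffer.

import numpy as np

def update_lambda(lam, adv_aux, adv_main, timesteps, gamma=0.99, mu=3e-3, alpha=3.0):
    # Off-policy estimate of J_R(pi) - J_R(pi_R) via Proposition 3:
    # E[gamma^t (A_{pi_R}(s,a) - A_pi(s,a))] over replay-buffer samples.
    discounts = gamma ** np.asarray(timesteps, dtype=float)
    perf_diff = float(np.mean(discounts * (np.asarray(adv_aux) - np.asarray(adv_main))))
    # Normalize the gradient so a single step size works across tasks, then update.
    perf_diff /= abs(perf_diff) + 1e-6
    lam = max(0.0, lam - mu * perf_diff)   # lambda_new = lambda_old - mu * (J_R(pi) - J_R(pi_R))
    coeff = alpha / (1.0 + lam)            # teacher weight alpha/(1+lambda) in the combined objective
    return lam, coeff

# Example: the main policy is estimated to outperform the auxiliary one,
# so lambda shrinks slightly and the teacher's weight grows.
lam, coeff = update_lambda(lam=9.0, adv_aux=[0.5, 0.2], adv_main=[0.1, 0.0], timesteps=[3, 10])
print(lam, coeff)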
For the discrete Gridworld domains (Tiger Door, Memory, and Lava Crossing), we used DQN <cit.> with soft target network updates, as proposed by <cit.>, which has been shown to improve the stability of learning. For the remaining continuous domains, we used SAC <cit.> with the architectures of the actor and critic chosen similarly and with a fixed entropy coefficient. For both DQN and SAC, we set the soft target update parameter to 0.005. As was mentioned in the paper, we represent the Q function using two separate networks, one for estimating Q_R and another for estimating Q_I. When updating a Q function, it has to be done with respect to some policy. We found that doing so with respect to policy π yields stable performance across all environments. For Tiger Door, Memory, and Lava Crossing, the teacher is a shortest-path algorithm executed over the grid map. For Light-Dark Ant, the teacher is a policy trained using RL over the teacher's observation space until achieving a success rate of 100%. In all of our experiments, we average performance over 5 random seeds and present the mean and 95% confidence interval. For all proprioceptive domains, we used a similar architecture across all algorithms. The architecture includes two fully-connected (FC) layers for embedding the past observations and actions separately. These embeddings are then passed through a Long Short-Term Memory (LSTM) layer to aggregate the inputs across the temporal domain. Additionally, the current observation is embedded using an FC layer and concatenated with the output of the LSTM. The concatenated representation is then passed through another fully-connected network with two hidden layers, which outputs the action. The architecture for pixel-based observations is the same, with the observations encoded by a Convolutional Neural Network (CNN) instead of FC layers. The number of neurons in each layer is determined by the specific domain. (A minimal sketch of this architecture appears at the end of this document.) The rest of the hyperparameters used for training the agents are summarized in <ref>. Our implementation is based on the code released by <cit.>. Fair Hyperparameter Tuning. We attempt to ensure that comparisons to baselines are fair. In particular, as part of our claim that our algorithm is more robust to the choice of its hyperparameters, we took the following steps. First, we re-implemented all baselines, and while conducting experiments, maintained consistent joint hyperparameters across the various algorithms. Second, all the experiments of our own algorithm, TGRL, used the same hyperparameters. We used α=3, an initial λ equal to 9 (and so the effective coefficient α/(1+λ)=0.3), and a coefficient learning rate of 3e-3. Finally, for every one of the baselines we performed, for each environment, a search over all the algorithm-specific hyperparameters with N=8 different values for each one and report the best results (except for COSIL, where we also report the average performance across hyperparameters).
§ ADDITIONAL RESULTS
Here we record additional results that were summarized or deferred in Section 4. In particular: Environments without information differences. Determining if the information difference between the teacher and the student in a given environment will lead to a sub-optimal student is a complex task, as it is dependent on the specific task and the observations available to the agent, which can vary significantly across different environments. As such, it can be challenging to know beforehand if this problem exists or not.
In the following experiment, we demonstrate that even in scenarios where this problem does not exist, the use of our proposed TGRL algorithm yields results that are comparable to those obtained using traditional Teacher-Student Learning (TSL) methods, which are typically considered the best approach in such scenarios. This highlights the robustness and versatility of our proposed approach. The experiment includes three classic POMDP environments from <cit.>. These environments are versions of the Mujoco Hopper, Walker2D, and HalfCheetah environments, where the agent only has access to the joint positions but not to their velocities. The teacher, however, has access to both positions and velocities. As can be seen in Figure <ref>, TGRL converges a bit more slowly than TSL but still manages to converge to the teacher's performance. Full training curves for Shadow Hand experiments. In Figure <ref>, we provide the full version of the training curves that appear in Figure <ref>.
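For completeness, and as referenced in the training-process description above, the following is a minimal sketch consistent with the recurrent student architecture (FC embeddings of past observations and actions, an LSTM over the history, concatenation with an embedding of the current observation, and a two-hidden-layer head). Layer widths, activations, and the forward interface are illustrative assumptions; in the experiments they are set per domain.

import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.obs_embed = nn.Linear(obs_dim, hidden)   # embeds past observations
        self.act_embed = nn.Linear(act_dim, hidden)   # embeds past actions
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.cur_embed = nn.Linear(obs_dim, hidden)   # embeds the current observation
        self.head = nn.Sequential(                    # two hidden layers, then the action output
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, past_obs, past_act, cur_obs):
        # past_obs: (B, T, obs_dim), past_act: (B, T, act_dim), cur_obs: (B, obs_dim)
        history = torch.cat([torch.relu(self.obs_embed(past_obs)),
                             torch.relu(self.act_embed(past_act))], dim=-1)
        summary, _ = self.lstm(history)               # aggregate the history over time
        summary = summary[:, -1]                      # last hidden state
        joint = torch.cat([summary, torch.relu(self.cur_embed(cur_obs))], dim=-1)
        return self.head(joint)                       # action logits / pre-squash mean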